
“On the same page”? The effect of GP examiner feedback on differences in rating severity in clinical assessments: a pre/post intervention study

Overview of attention for article published in BMC Medical Education, June 2017

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by: 3 X users
Citations: 14 (Dimensions)
Readers: 34 (Mendeley)
Title
“On the same page”? The effect of GP examiner feedback on differences in rating severity in clinical assessments: a pre/post intervention study
Published in
BMC Medical Education, June 2017
DOI 10.1186/s12909-017-0929-9
Authors

Nancy Sturman, Remo Ostini, Wai Yee Wong, Jianzhen Zhang, Michael David

Abstract

Robust and defensible clinical assessments attempt to minimise differences in student grades that are due to differences in examiner severity (stringency and leniency). Unfortunately, there is little evidence to date that examiner training and feedback interventions are effective; "physician raters" have indeed been deemed "impervious to feedback". Our aim was to investigate the effectiveness of a general practitioner examiner feedback intervention and to explore examiner attitudes to it. Sixteen examiners were provided with a written summary of all examiner ratings in medical student clinical case examinations over the preceding 18 months, enabling them to identify their own rating data and compare it with that of other examiners. Examiner ratings and examiner severity self-estimates were analysed pre- and post-intervention using non-parametric bootstrapping, multivariable linear regression, intra-class correlation and Spearman's correlation analyses. Examiners completed a survey exploring their perceptions of the usefulness and acceptability of the intervention, including what (if anything) they planned to do differently as a result of the feedback. Examiner severity self-estimates were relatively poorly correlated with measured severity on the two clinical case examination types pre-intervention (0.29 and 0.67) and were less accurate post-intervention. No significant effect of the intervention was identified when differences in case difficulty were controlled for, although there were fewer outlier examiners post-intervention. Drift in examiner severity over time prior to the intervention was also observed. Participants rated the intervention as interesting and useful, and survey comments indicated that fairness, reassurance, and understanding examiner colleagues are important to examiners. Although our participants were receptive to our feedback and wanted to be "on the same page", we did not demonstrate effective use of the feedback to change their rating behaviours. Calibration of severity appears to be difficult for examiners, and further research into better ways of providing more effective feedback is indicated.

X Demographics


The data shown below were collected from the profiles of the 3 X users who shared this research output.
Mendeley readers


The data shown below were compiled from readership statistics for 34 Mendeley readers of this research output.

Geographical breakdown

Country     Count   As %
Malaysia        1     3%
Unknown        33    97%

Demographic breakdown

Readers by professional status   Count   As %
Student > Master                     7    21%
Student > Bachelor                   6    18%
Other                                3     9%
Student > Postgraduate               3     9%
Lecturer                             2     6%
Other                                4    12%
Unknown                              9    26%

Readers by discipline            Count   As %
Medicine and Dentistry              12    35%
Nursing and Health Professions       3     9%
Psychology                           2     6%
Computer Science                     1     3%
Arts and Humanities                  1     3%
Other                                5    15%
Unknown                             10    29%
Attention Score in Context


This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 11 June 2017.
All research outputs: #14,350,775 of 22,979,862 outputs
Outputs from BMC Medical Education: #1,969 of 3,352 outputs
Outputs of similar age: #177,034 of 317,259 outputs
Outputs of similar age from BMC Medical Education: #31 of 55 outputs
Altmetric has tracked 22,979,862 research outputs across all sources so far. This one is in the 35th percentile – i.e., 35% of other outputs scored the same or lower than it.
So far Altmetric has tracked 3,352 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.3. This one is in the 36th percentile – i.e., 36% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 317,259 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 41st percentile – i.e., 41% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 55 others from the same source and published within six weeks on either side of this one. This one is in the 36th percentile – i.e., 36% of its contemporaries scored the same or lower than it.
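The percentile figures above all follow the rule stated in the text: the share of tracked outputs whose score is the same or lower than this one's. A minimal sketch of that rule (a hypothetical helper, not Altmetric's implementation):

```python
# Sketch: percentile rank as "percent of scores at or below the given score",
# matching the wording used in the context comparisons above.

def percentile_rank(all_scores, score):
    """Percent of all_scores that are <= score."""
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100 * at_or_below / len(all_scores)
```

For example, a score of 2 among the scores [0, 1, 2, 3, 9] sits at the 60th percentile, since three of the five scores are the same or lower.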