
Does what we write matter? Determining the features of high- and low-quality summative written comments of students on the internal medicine clerkship using pile-sort and consensus analysis: a mixed-methods study

Overview of attention for article published in BMC Medical Education, May 2016

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (51st percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

X (Twitter): 3 users

Citations

Dimensions: 16 citations

Readers on

Mendeley: 60 readers
Title
Does what we write matter? Determining the features of high- and low-quality summative written comments of students on the internal medicine clerkship using pile-sort and consensus analysis: a mixed-methods study
Published in
BMC Medical Education, May 2016
DOI 10.1186/s12909-016-0660-y
Authors

Lauren Gulbas, William Guerin, Hilary F. Ryder

Abstract

Written comments by medical student supervisors provide the written foundation for grade narratives and deans' letters and play an important role in students' professional development. Although written comments are widely used, little has been published about their quality. We hypothesized that medical students share an understanding of the qualities inherent to a high-quality and a low-quality narrative comment, and we aimed to determine the features that define high- and low-quality comments.

Using the well-established anthropological pile-sort method, medical students sorted written comments into 'helpful' and 'unhelpful' piles and were then interviewed to determine how they evaluated the comments. We used multidimensional scaling and cluster analysis to analyze the data, revealing how written comments were sorted across student participants. We calculated the degree of shared knowledge to determine the level of internal validity in the data. We transcribed and coded data elicited during the structured interviews to contextualize the students' answers. Length of comment was compared using one-way analysis of variance; valence and the frequency with which comments were thought of as helpful were analyzed by chi-square.

Analysis of written comments revealed four distinct clusters. Cluster A comments reinforced good behaviors or gave constructive criticism for how changes could be made. Cluster B comments exhorted students to continue non-specific behaviors already exhibited. Cluster C comments used grading-rubric terms without giving student-specific examples. Cluster D comments used sentence fragments lacking verbs and punctuation. Student data exhibited a strong fit to the consensus model, demonstrating that medical students share a robust model of the attributes of helpful and unhelpful comments. There was no correlation between the valence of a comment and its perceived helpfulness.

Students find most helpful those comments that demonstrate knowledge of the student and provide specific examples of appropriate behavior to be reinforced or inappropriate behavior to be eliminated, and find non-actionable, non-specific comments least helpful. Our research and analysis allow us to make recommendations for faculty development around written feedback.
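The analysis pipeline the abstract describes — aggregate individual pile sorts into an item-by-item dissimilarity matrix, then apply multidimensional scaling and cluster analysis — can be sketched as follows. This is a minimal illustration on made-up pile-sort data, not the authors' code: the dissimilarity definition (fraction of sorters who separated two comments), the classical-MDS variant, and the two-cluster cut are assumptions for the sketch (the paper reports four clusters on its real data).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical pile-sort data (illustrative only): each row is one student
# sorter, each column one written comment, and a 1/0 entry records whether
# that sorter placed the comment in the "helpful" or "unhelpful" pile.
piles = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 1],
])
n_sorters, n_items = piles.shape

# Dissimilarity between two comments = fraction of sorters who placed
# them in different piles.
diss = np.zeros((n_items, n_items))
for row in piles:
    diss += (row[:, None] != row[None, :]).astype(float)
diss /= n_sorters

# Classical (metric) MDS: double-center the squared dissimilarities and
# take the top-2 eigenvectors as 2-D coordinates for each comment.
J = np.eye(n_items) - np.ones((n_items, n_items)) / n_items
B = -0.5 * J @ (diss ** 2) @ J
vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
top = np.argsort(vals)[::-1][:2]          # indices of the two largest
coords = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))

# Average-linkage hierarchical clustering on the same dissimilarities,
# cut into two groups of comments.
condensed = diss[np.triu_indices(n_items, k=1)]
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
```

On this toy input, comments consistently sorted together end up near each other in the MDS plane and in the same cluster; the consensus-analysis step of the paper (estimating the degree of shared knowledge across sorters) would be a separate computation on the sorter-by-sorter agreement matrix.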

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 60 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Spain 1 2%
Unknown 59 98%

Demographic breakdown

Readers by professional status Count As %
Researcher 9 15%
Student > Master 6 10%
Student > Doctoral Student 6 10%
Unspecified 5 8%
Lecturer 5 8%
Other 18 30%
Unknown 11 18%
Readers by discipline Count As %
Medicine and Dentistry 27 45%
Social Sciences 7 12%
Unspecified 5 8%
Nursing and Health Professions 4 7%
Computer Science 2 3%
Other 3 5%
Unknown 12 20%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 31 May 2016.
All research outputs
#13,118,240
of 22,870,727 outputs
Outputs from BMC Medical Education
#1,588
of 3,336 outputs
Outputs of similar age
#150,178
of 312,377 outputs
Outputs of similar age from BMC Medical Education
#38
of 63 outputs
Altmetric has tracked 22,870,727 research outputs across all sources so far. This one is in the 42nd percentile – i.e., 42% of other outputs scored the same or lower than it.
So far Altmetric has tracked 3,336 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.3. This one has received more attention than average, scoring higher than 51% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 312,377 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 51% of its contemporaries.
We're also able to compare this research output to 63 others from the same source and published within six weeks on either side of this one. This one is in the 39th percentile – i.e., 39% of its contemporaries scored the same or lower than it.