
Validity of a new assessment rubric for a short-answer test of clinical reasoning

Overview of attention for article published in BMC Medical Education, July 2016

About this Attention Score

  • Average Attention Score compared to outputs of the same age

Mentioned by

3 X users

Citations

10 Dimensions

Readers on

67 Mendeley
Title
Validity of a new assessment rubric for a short-answer test of clinical reasoning
Published in
BMC Medical Education, July 2016
DOI 10.1186/s12909-016-0714-1
Authors

Euson Yeung, Kulamakan Kulasegaram, Nicole Woods, Adam Dubrowski, Brian Hodges, Heather Carnahan

Abstract

The validity of high-stakes decisions derived from assessment results is of primary concern to candidates and certifying institutions in the health professions. In the field of orthopaedic manual physical therapy (OMPT), there is a dearth of documented validity evidence to support the certification process, particularly for short-answer tests. To address this need, we examined the internal structure of the Case History Assessment Tool (CHAT), a new assessment rubric developed to appraise written responses to a short-answer test of clinical reasoning in post-graduate OMPT certification in Canada. Fourteen physical therapy students (novices) and 16 physical therapists (PTs), with minimal and substantial OMPT training respectively, completed a mock examination. Four pairs of examiners (n = 8) appraised the written responses using the CHAT. We conducted separate generalizability studies (G studies) for all participants and by level of OMPT training. Internal consistency was calculated for test questions with more than two assessment items. Decision studies were also conducted to determine the optimal application of the CHAT for OMPT certification. The overall reliability of CHAT scores was moderate; however, reliability estimates for the novice group suggest that the scale was incapable of accommodating the scores of novices. Internal consistency estimates indicate item redundancies for several test questions, which will require further investigation. Future validity studies should consider discriminating the clinical reasoning competence of OMPT trainees strictly at the post-graduate level. Although rater variance was low, the large variance attributed to error sources not incorporated in our G studies warrants further investigation into other threats to validity. Future examination of examiner stringency is also warranted.
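The abstract's internal-consistency estimates for multi-item test questions are typically reported as Cronbach's alpha. As a rough illustration of the statistic itself (this is not the authors' analysis code, and the data below are invented; population variances are assumed for simplicity):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a question with k assessment items.

    `items` is a list of k lists, one per item, each holding that
    item's scores for the same respondents in the same order.
    """
    k = len(items)
    n_respondents = len(items[0])
    # Sum of the individual item score variances.
    item_variances = sum(statistics.pvariance(item) for item in items)
    # Variance of each respondent's total score across all items.
    totals = [sum(item[r] for item in items) for r in range(n_respondents)]
    total_variance = statistics.pvariance(totals)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Two perfectly correlated items for four respondents give alpha = 1.0;
# very high values like this are one signal of item redundancy.
print(cronbach_alpha([[2, 4, 6, 8], [1, 3, 5, 7]]))  # 1.0
```

High alpha values can indicate the kind of item redundancy the authors flag: items so strongly correlated that they add little independent measurement information.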

X Demographics
The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 67 Mendeley readers of this research output.

Geographical breakdown

Country         Count   As %
United States       1     1%
Unknown            66    99%

Demographic breakdown

Readers by professional status     Count   As %
Other                                  7    10%
Researcher                             6     9%
Professor > Associate Professor        6     9%
Student > Doctoral Student             6     9%
Lecturer                               5     7%
Other                                 22    33%
Unknown                               15    22%
Readers by discipline                  Count   As %
Medicine and Dentistry                    19    28%
Nursing and Health Professions            14    21%
Social Sciences                            6     9%
Engineering                                3     4%
Economics, Econometrics and Finance        2     3%
Other                                      6     9%
Unknown                                   17    25%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 September 2016.
All research outputs
#14,268,952
of 22,881,964 outputs
Outputs from BMC Medical Education
#1,967
of 3,337 outputs
Outputs of similar age
#214,460
of 365,298 outputs
Outputs of similar age from BMC Medical Education
#44
of 66 outputs
Altmetric has tracked 22,881,964 research outputs across all sources so far. This one is in the 35th percentile – i.e., 35% of other outputs scored the same or lower than it.
So far Altmetric has tracked 3,337 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.3. This one is in the 36th percentile – i.e., 36% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 365,298 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 38th percentile – i.e., 38% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 66 others from the same source and published within six weeks on either side of this one. This one is in the 28th percentile – i.e., 28% of its contemporaries scored the same or lower than it.
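The percentile figures above all follow the same rule: the share of tracked outputs whose score is the same as or lower than this one's. A toy sketch of that calculation (illustrative data only, not Altmetric's implementation or real score distribution):

```python
def percentile_rank(score, all_scores):
    """Percentage of outputs scoring the same as or lower than `score`."""
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100 * at_or_below / len(all_scores)

# With this made-up distribution, a score of 2 sits at the 50th percentile:
# 5 of the 10 outputs scored 2 or lower.
print(percentile_rank(2, [0, 1, 1, 2, 2, 3, 5, 8, 13, 21]))  # 50.0
```

Running the same comparison against different pools (all outputs, same journal, same publication window) is what produces the four different percentiles reported above for a single score of 2.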