
Does faculty development influence the quality of in-training evaluation reports in pharmacy?

Overview of attention for an article published in BMC Medical Education, November 2017

Mentioned by
4 X users

Citations
3 Dimensions citations

Readers on Mendeley
31 readers
Title
Does faculty development influence the quality of in-training evaluation reports in pharmacy?
Published in
BMC Medical Education, November 2017
DOI 10.1186/s12909-017-1054-5
Pubmed ID
Authors

Kerry Wilbur

Abstract

In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines. However, outside of medicine, the quality of submitted workplace-based assessments is largely uninvestigated. This study assessed the quality of ITERs in pharmacy and whether clinical supervisors could be trained to complete higher-quality reports. A random sample of ITERs submitted in a pharmacy program during 2013-2014 was evaluated. These ITERs served as a historical control (control group 1) for comparison with ITERs submitted in 2015-2016 by clinical supervisors who participated in an interactive faculty development workshop (intervention group) and those who did not (control group 2). Two trained independent raters scored the ITERs using a previously validated nine-item scale assessing report quality, the Completed Clinical Evaluation Report Rating (CCERR). Each item is scored on a scale anchored at 1 ("not at all") and 5 ("exemplary"), with 3 categorized as "acceptable". The mean CCERR score for reports completed after the workshop (22.9 ± 3.39) was not significantly different from that of prospective control group 2 (22.7 ± 3.63, p = 0.84) and was lower than that of historical control group 1 (37.9 ± 8.21, p = 0.001). Mean item scores were below the acceptable threshold for 5 of the 9 domains in control group 1, including supervisor-documented evidence of specific examples to clearly explain weaknesses and concrete recommendations for student improvement. Mean item scores were below the acceptable threshold for 6 and 7 of the 9 domains in control group 2 and the intervention group, respectively. This study is the first to use the CCERR to evaluate ITER quality outside of medicine. Findings demonstrate low baseline CCERR scores in a pharmacy program that were not demonstrably changed by a faculty development workshop; strategies are identified to augment future rater training.
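
The CCERR arithmetic the abstract relies on is easy to reproduce. Below is a minimal Python sketch of the scoring logic, using invented item ratings: nine items per report, each scored 1-5, report totals averaged per group, and per-item means checked against the "acceptable" threshold of 3. Only the scale structure comes from the abstract; the data and any names here are hypothetical.

    from statistics import mean, stdev

    ACCEPTABLE = 3  # CCERR item score categorized as "acceptable"

    # Hypothetical ratings: one list of nine item scores (1-5) per report.
    reports = [
        [3, 2, 4, 2, 3, 2, 3, 2, 3],
        [2, 3, 3, 2, 2, 3, 3, 2, 3],
        [3, 3, 2, 3, 2, 2, 4, 3, 2],
    ]

    # Total CCERR score per report (possible range 9-45), then the
    # group mean +/- SD, the form in which the abstract reports results.
    totals = [sum(r) for r in reports]
    print(f"mean CCERR {mean(totals):.1f} +/- {stdev(totals):.2f}")

    # Mean score per item across reports; flag domains falling below
    # the acceptable threshold, as reported for 5-7 of the 9 domains.
    for i, item_scores in enumerate(zip(*reports), start=1):
        m = mean(item_scores)
        flag = " (below acceptable)" if m < ACCEPTABLE else ""
        print(f"item {i}: mean {m:.2f}{flag}")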

X Demographics

The data shown below were collected from the profiles of the 4 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 31 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Unknown      31   100%

Demographic breakdown

Readers by professional status    Count   As %
Other                                 4    13%
Lecturer                              3    10%
Professor                             3    10%
Professor > Associate Professor       3    10%
Student > Ph. D. Student              2     6%
Other                                 4    13%
Unknown                              12    39%
Readers by discipline                                  Count   As %
Medicine and Dentistry                                     7    23%
Nursing and Health Professions                             6    19%
Pharmacology, Toxicology and Pharmaceutical Science        2     6%
Business, Management and Accounting                        2     6%
Social Sciences                                            2     6%
Other                                                      1     3%
Unknown                                                   11    35%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 02 December 2017.
All research outputs: #14,085,315 of 23,008,860 outputs
Outputs from BMC Medical Education: #1,902 of 3,365 outputs
Outputs of similar age: #228,151 of 437,733 outputs
Outputs of similar age from BMC Medical Education: #69 of 96 outputs
Altmetric has tracked 23,008,860 research outputs across all sources so far. This one is in the 37th percentile – i.e., 37% of other outputs scored the same or lower than it.
So far Altmetric has tracked 3,365 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.3. This one is in the 41st percentile – i.e., 41% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 437,733 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 46th percentile – i.e., 46% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 96 others from the same source and published within six weeks on either side of this one. This one is in the 22nd percentile – i.e., 22% of its contemporaries scored the same or lower than it.
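
Each percentile statement above follows the same rule: an output's percentile is the percentage of its comparison pool that scored the same as or lower than it. A minimal Python sketch of that calculation, using an invented pool of attention scores rather than Altmetric's tracked data:

    def percentile_rank(score, all_scores):
        # Percentage of outputs scoring the same as or lower than `score`.
        same_or_lower = sum(1 for s in all_scores if s <= score)
        return 100 * same_or_lower / len(all_scores)

    # Illustrative pool only; the real comparisons above use pools of
    # millions (all outputs) down to 96 (same journal, similar age).
    pool = [0, 1, 1, 2, 2, 3, 5, 8, 13, 21]
    print(f"{percentile_rank(2, pool):.0f}th percentile")  # -> 50th percentile

Because ties count toward the rank, a modest score like 2 can still sit near the middle of a pool dominated by zero- and one-mention outputs, which is consistent with the mid-range percentiles reported above.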