
Development and evaluation of a standardized peer-training in the context of peer review for quality assurance in work capacity evaluation

Overview of attention for article published in BMC Medical Education, June 2018

Mentioned by
2 X users

Citations
3 Dimensions citations

Readers on
41 Mendeley readers
Published in
BMC Medical Education, June 2018
DOI 10.1186/s12909-018-1233-z
Authors

André Strahl, Christian Gerlich, Georg W. Alpers, Katja Ehrmann, Jörg Gehrke, Annette Müller-Garnn, Heiner Vogel

Abstract

The German quality assurance programme for evaluating work capacity is based on a peer review that assesses the quality of medical experts' reports. Its low reliability is thought to result from systematic differences among peers. To address this, we developed a curriculum for a standardized peer-training (SPT). This study investigates whether the SPT increases the inter-rater reliability of social medicine physicians participating in a cross-institutional peer review. Forty physicians from 16 regional German Pension Insurance institutions underwent the SPT. The three-day training course consists of nine educational objectives recorded in a training manual. The SPT is split into a basic module providing basic information about the peer review and an advanced module in which small groups of up to 12 peers practise the peer review on medical reports. Feasibility was tested by assessing the selection, comprehensibility and subjective usefulness of the contents delivered, the trainers' delivery, and the design of the training materials. The effectiveness of the SPT was determined by evaluating peer concordance on three anonymised medical reports assessed by each peer. Percentage agreement and Fleiss' kappa (κm) were calculated. Concordance was compared with review results from a previous unstructured, non-standardized peer-training programme (control condition) completed by 19 peers from 12 German Pension Insurance departments. The control condition focused exclusively on applying the peer review in small groups; no specific training materials, methods or trainer instructions were used. The peer-training was shown to be feasible. Subjective confidence in handling the peer review instrument varied between 70 and 90%. Average percentage agreement on the main outcome criterion was 60.2%, corresponding to a κm of 0.39. By comparison, the control condition yielded an average percentage agreement of 40.2% and a κm of 0.12. Concordance on the main criterion was thus relevantly, though not significantly (p = 0.2), higher for the SPT than for the control condition. Fleiss' kappa showed that peer concordance under the SPT was higher than expected by chance. Nevertheless, a value of 0.39 for the main criterion indicates only fair inter-rater reliability, considerably below the conventional standard of 0.7 for adequate reliability.
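The two statistics reported in the abstract, percentage agreement and Fleiss' kappa, can be reproduced with standard formulas. The sketch below is not the authors' analysis code; it is a minimal Python illustration using made-up ratings, assuming each report is rated by the same number of peers on a small set of quality categories.

```python
import numpy as np

def rating_counts(ratings, n_categories):
    """Convert an items x raters matrix of category labels (0..k-1)
    into an items x categories matrix of counts."""
    items, _ = ratings.shape
    counts = np.zeros((items, n_categories), dtype=int)
    for i in range(items):
        for r in ratings[i]:
            counts[i, r] += 1
    return counts

def agreement_and_fleiss_kappa(counts):
    """Average pairwise agreement and Fleiss' kappa for an
    items x categories count matrix (equal raters per item)."""
    n = counts.sum(axis=1)[0]           # raters per item
    N = counts.shape[0]                 # number of items
    p_j = counts.sum(axis=0) / (N * n)  # overall category proportions
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar = P_i.mean()                  # observed agreement
    P_e = np.sum(p_j ** 2)              # chance agreement
    return P_bar, (P_bar - P_e) / (1 - P_e)

# Hypothetical ratings: 3 reports, 5 peers, 3 quality categories (0, 1, 2).
ratings = np.array([
    [0, 0, 1, 0, 0],
    [1, 1, 1, 2, 1],
    [2, 1, 2, 2, 0],
])
counts = rating_counts(ratings, n_categories=3)
percent_agreement, kappa = agreement_and_fleiss_kappa(counts)
print(f"percentage agreement: {percent_agreement:.1%}, Fleiss' kappa: {kappa:.2f}")
```

The reported figures (60.2% agreement, κm = 0.39 for the SPT versus 40.2% and 0.12 for the control condition) would follow from the same formulas applied to the actual rating matrices.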

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 41 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 41 100%

Demographic breakdown

Readers by professional status Count As %
Student > Master 6 15%
Student > Doctoral Student 5 12%
Student > Ph. D. Student 4 10%
Lecturer 2 5%
Student > Postgraduate 2 5%
Other 5 12%
Unknown 17 41%
Readers by discipline Count As %
Medicine and Dentistry 7 17%
Social Sciences 3 7%
Engineering 3 7%
Psychology 2 5%
Nursing and Health Professions 1 2%
Other 9 22%
Unknown 16 39%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 17 June 2018.
All research outputs
#17,980,413
of 23,090,520 outputs
Outputs from BMC Medical Education
#2,646
of 3,384 outputs
Outputs of similar age
#237,296
of 328,585 outputs
Outputs of similar age from BMC Medical Education
#70
of 88 outputs
Altmetric has tracked 23,090,520 research outputs across all sources so far. This one is in the 19th percentile – i.e., 19% of other outputs scored the same or lower than it.
So far Altmetric has tracked 3,384 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.4. This one is in the 17th percentile – i.e., 17% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 328,585 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 22nd percentile – i.e., 22% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 88 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
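The percentile figures quoted above follow the definition given in the text: the share of comparator outputs whose Attention Score is the same as or lower than this output's score. The Python sketch below illustrates that calculation with invented scores; it is not Altmetric's data or code.

```python
def percentile_rank(score, comparator_scores):
    """Percentage of comparator scores that are <= the given score."""
    at_or_below = sum(1 for s in comparator_scores if s <= score)
    return 100.0 * at_or_below / len(comparator_scores)

# Hypothetical Attention Scores for outputs published around the same time.
contemporary_scores = [0, 0, 1, 1, 2, 3, 5, 8, 13, 40]
print(f"A score of 1 sits at the {percentile_rank(1, contemporary_scores):.0f}th percentile")
```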