
The predictive validity of a situational judgement test and multiple-mini interview for entry into postgraduate training in Australia

Overview of attention for article published in BMC Medical Education, March 2016

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (81st percentile)
  • High Attention Score compared to outputs of the same age and source (84th percentile)

Mentioned by

13 X users
1 Facebook page

Citations

51 Dimensions

Readers on

120 Mendeley
Title
The predictive validity of a situational judgement test and multiple-mini interview for entry into postgraduate training in Australia
Published in
BMC Medical Education, March 2016
DOI 10.1186/s12909-016-0606-4
Authors

Fiona Patterson, Emma Rowett, Robert Hale, Marcia Grant, Chris Roberts, Fran Cousans, Stuart Martin

Abstract

Evidence for the predictive validity of situational judgement tests (SJTs) and multiple-mini interviews (MMIs) is well established in undergraduate selection contexts; however, at present there is less evidence to support the validity of their use in postgraduate settings. More research is also required to assess the extent to which SJTs and MMIs are complementary for predicting performance in practice. This study represents the first longitudinal evaluation of the complementary roles and predictive validity of an SJT and an MMI for selection for entry into postgraduate General Practice (GP) specialty training in Australia. Longitudinal data were collected from 443 GP registrars in Australia who were selected into GP training in 2010 or 2011. All 17 Regional Training Providers in Australia were asked to participate; performance data were received from 13 of these. Data were collected on participants' end-of-training assessment performance. Outcome measures included GP registrars' performance on the Royal Australian College of General Practitioners (RACGP) applied knowledge test, key feature problems and an objective structured clinical exam. Performance on the SJT, the MMI and the overall selection score significantly predicted all three end-of-training assessments (r = .12 to .54), indicating that both selection methods, as well as the overall selection score, have good predictive validity. The SJT and MMI each provided incremental validity over the other for two of the three end-of-training assessments, and both were significant positive predictors of all end-of-training assessments. Results provide evidence that they are complementary in predicting end-of-training assessment scores. This research adds to the currently limited literature regarding the predictive validity of postgraduate medical selection methods, and their comparable effectiveness when used in a single selection system. A future research agenda is proposed.
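
For readers unfamiliar with the incremental-validity claim in the abstract, the sketch below illustrates one common way such an analysis is carried out: nested (hierarchical) regression models, where the gain in R² from adding the second predictor is its incremental validity. This is not the authors' analysis code; the column names (sjt, mmi, akt_score), the synthetic data and the single outcome are all hypothetical, used only to make the idea concrete.

```python
# Illustrative sketch only: NOT the authors' analysis. Hypothetical column names,
# synthetic data, and one outcome stand in for the study's three assessments.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 443  # sample size reported in the abstract
df = pd.DataFrame({
    "sjt": rng.normal(size=n),
    "mmi": rng.normal(size=n),
})
# Synthetic outcome loosely related to both predictors (illustration only).
df["akt_score"] = 0.4 * df["sjt"] + 0.3 * df["mmi"] + rng.normal(size=n)

# Step 1: outcome regressed on the SJT alone.
m1 = sm.OLS(df["akt_score"], sm.add_constant(df[["sjt"]])).fit()
# Step 2: add the MMI; the change in R-squared is the MMI's incremental validity.
m2 = sm.OLS(df["akt_score"], sm.add_constant(df[["sjt", "mmi"]])).fit()

print(f"R2 (SJT only): {m1.rsquared:.3f}")
print(f"R2 (SJT + MMI): {m2.rsquared:.3f}")
print(f"Incremental R2 from adding the MMI: {m2.rsquared - m1.rsquared:.3f}")
```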

X Demographics

The data shown below were collected from the profiles of 13 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 120 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    120      100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Ph.D. Student           15       13%
Student > Bachelor                15       13%
Student > Master                  14       12%
Other                              9        8%
Lecturer                           8        7%
Other                             34       28%
Unknown                           25       21%

Readers by discipline             Count    As %
Medicine and Dentistry            44       37%
Psychology                        19       16%
Social Sciences                    6        5%
Nursing and Health Professions     5        4%
Sports and Recreations             3        3%
Other                             14       12%
Unknown                           29       24%
Attention Score in Context

This research output has an Altmetric Attention Score of 9. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 03 May 2017.
All research outputs: #3,571,394 of 22,854,458 outputs
Outputs from BMC Medical Education: #572 of 3,327 outputs
Outputs of similar age: #56,604 of 300,116 outputs
Outputs of similar age from BMC Medical Education: #13 of 84 outputs
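As a quick arithmetic check on how these ranks map to the percentiles quoted below, assuming the percentile is simply the share of tracked outputs ranked below this one:

```python
# Percentile from rank (assumption: percentile = share of tracked outputs ranked lower).
rank, total = 3_571_394, 22_854_458
percentile = (1 - rank / total) * 100
print(f"{percentile:.1f}th percentile")  # ~84.4, consistent with the 84th percentile quoted below
```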
Altmetric has tracked 22,854,458 research outputs across all sources so far. Compared to these, this one has done well and is in the 84th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 3,327 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.3. This one has done well, scoring higher than 82% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 300,116 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 81% of its contemporaries.
We're also able to compare this research output to 84 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 84% of its contemporaries.