
A general framework for comparative Bayesian meta-analysis of diagnostic studies

Overview of attention for article published in BMC Medical Research Methodology, August 2015

  • Above-average Attention Score compared to outputs of the same age (63rd percentile)

Mentioned by

  • 1 Q&A thread

Citations

  • 27 Dimensions

Readers on

  • 40 Mendeley
Published in
BMC Medical Research Methodology, August 2015
DOI 10.1186/s12874-015-0061-7

Joris Menten, Emmanuel Lesaffre


Selecting the most effective diagnostic method is essential for patient management and public health interventions. This requires evidence on the relative performance of alternative tests or diagnostic algorithms, and hence diagnostic test accuracy meta-analyses that allow the comparison of two or more competing tests. Such meta-analyses are, however, complicated by the paucity of studies that directly compare the performance of diagnostic tests. A second complication is that the diagnostic accuracy of the tests is usually determined by comparing the index test results with those of a reference standard. These reference standards are presumed to be perfect, i.e. to classify diseased and non-diseased subjects without error. In practice, this assumption rarely holds, and most reference standards produce some false positive or false negative results. When an imperfect reference standard is used, the estimated accuracy of the tests of interest may be biased, as may the comparisons between these tests.

We propose a model that allows the comparison of the accuracy of two diagnostic tests using direct (head-to-head) comparisons as well as indirect comparisons through a third test. In addition, the model allows and corrects for imperfect reference tests. The model is inspired by the mixed-treatment comparison meta-analyses developed for randomized controlled trials. Because the model is estimated using Bayesian methods, it can incorporate prior knowledge on the diagnostic accuracy of the reference tests used. We show the bias that can result from using inappropriate methods in the meta-analysis of diagnostic tests, and how our method provides more accurate estimates of the difference in diagnostic accuracy between two tests.

As an illustration, we apply this model to a dataset on visceral leishmaniasis diagnostic tests, comparing the accuracy of the rK39 dipstick with that of the direct agglutination test. Our proposed meta-analytic model can improve the comparison of the diagnostic accuracy of competing tests in a systematic review. This holds, however, only if the studies, and especially the reference tests used, are reported in sufficient detail: the type and exact procedures of the reference tests are needed, including any cut-offs applied and the number of subjects excluded from full reference test assessment. If this information is lacking, it may be better to limit the meta-analysis to direct comparisons.
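The bias the abstract describes can be illustrated with the standard formulas for the *apparent* sensitivity and specificity of an index test judged against an imperfect reference, under the simplifying assumption that the two tests are conditionally independent given true disease status. This is only an illustrative sketch with hypothetical accuracy values, not the paper's Bayesian model, which is more general:

```python
def apparent_accuracy(se_t, sp_t, se_r, sp_r, prev):
    """Apparent sensitivity and specificity of an index test when an
    imperfect reference (se_r, sp_r) defines disease status, assuming
    conditional independence of the two tests given true status."""
    # P(index+ and reference+), mixing diseased and non-diseased subjects
    both_pos = prev * se_t * se_r + (1 - prev) * (1 - sp_t) * (1 - sp_r)
    ref_pos = prev * se_r + (1 - prev) * (1 - sp_r)
    # P(index- and reference-)
    both_neg = (1 - prev) * sp_t * sp_r + prev * (1 - se_t) * (1 - se_r)
    ref_neg = (1 - prev) * sp_r + prev * (1 - se_r)
    return both_pos / ref_pos, both_neg / ref_neg

# With a perfect reference, apparent accuracy equals true accuracy:
print(apparent_accuracy(0.90, 0.95, 1.0, 1.0, 0.30))   # ≈ (0.90, 0.95)

# With an imperfect reference (Se 0.85, Sp 0.98), the same index test
# appears less accurate than it truly is:
print(apparent_accuracy(0.90, 0.95, 0.85, 0.98, 0.30))
```

Under these (assumed) values the apparent sensitivity and specificity both fall below the true 0.90 and 0.95, which is the kind of bias the proposed model corrects for.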

Mendeley readers

The data shown below were compiled from readership statistics for 40 Mendeley readers of this research output. Click here to see the associated Mendeley record.

Geographical breakdown

Country        Count  As %
United States      1    3%
Unknown           39   98%

Demographic breakdown

Readers by professional status     Count  As %
Student > Ph.D. Student               12   30%
Researcher                            10   25%
Student > Master                       4   10%
Student > Bachelor                     2    5%
Professor > Associate Professor        2    5%
Other                                  3    8%
Unknown                                7   18%

Readers by discipline              Count  As %
Medicine and Dentistry                10   25%
Mathematics                            4   10%
Nursing and Health Professions         2    5%
Linguistics                            2    5%
Social Sciences                        2    5%
Other                                  6   15%
Unknown                               14   35%

Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 21 June 2016.
Altmetric has tracked 13,728,183 research outputs across all sources so far. This one is in the 47th percentile – i.e., 47% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,265 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 9.1. This one is in the 43rd percentile – i.e., 43% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 238,953 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 63% of its contemporaries.
We're also able to compare this research output to the 1 other output from the same source published within six weeks on either side of this one. This one has scored higher than all of them.