
Improving search efficiency for systematic reviews of diagnostic test accuracy: an exploratory study to assess the viability of limiting to MEDLINE, EMBASE and reference checking

Overview of attention for article published in Systematic Reviews, June 2015

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (94th percentile)
  • High Attention Score compared to outputs of the same age and source (81st percentile)

Mentioned by

  • 1 blog
  • 39 X users

Citations

  • 23 Dimensions

Readers on

  • 53 Mendeley
  • 3 CiteULike
Title
Improving search efficiency for systematic reviews of diagnostic test accuracy: an exploratory study to assess the viability of limiting to MEDLINE, EMBASE and reference checking
Published in
Systematic Reviews, June 2015
DOI 10.1186/s13643-015-0074-7
Pubmed ID
Authors

Louise Preston, Christopher Carroll, Paolo Gardois, Suzy Paisley, Eva Kaltenthaler

Abstract

Increasing numbers of systematic reviews evaluating the diagnostic test accuracy of technologies are being published. Currently, review teams tend to apply conventional systematic review standards to identify relevant studies for inclusion, for example sensitive searches of multiple bibliographic databases. There has been little evaluation of the efficiency of searching only one or two such databases for this type of review. The aim of this study was to assess the viability of an approach that restricted searches to MEDLINE, EMBASE and the reference lists of included studies. A convenience sample of nine Health Technology Assessment (HTA) systematic reviews of diagnostic test accuracy, with 302 included citations, was analysed to determine the number and proportion of included citations that were indexed in and retrieved from MEDLINE and EMBASE. An assessment was also made of the number and proportion of citations not retrieved from these databases but that could have been identified from the reference lists of included citations. 287/302 (95 %) of the included citations in the nine reviews were indexed across MEDLINE and EMBASE. The reviews' searches of MEDLINE and EMBASE accounted for 85 % of the included citations (256/302). Of the forty-six (15 %) included citations not retrieved by the published searches, 24 (8 %) could be found in the reference lists of included citations. Only 22/302 (7 %) of the included citations were not found by the proposed, more efficient approach. The proposed approach would have accounted for 280/302 (93 %) of included citations in this sample of nine systematic reviews. This exploratory study suggests that there might be a case for restricting searches for systematic reviews of diagnostic test accuracy studies to MEDLINE, EMBASE and the reference lists of included citations. The conduct of such reviews might be rendered more efficient by using this approach.
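The headline proportions in the abstract follow directly from the reported counts. The short Python sketch below is not part of the paper; the variable names and labels are illustrative, and the counts are taken verbatim from the abstract.

```python
# Minimal sketch reproducing the coverage arithmetic reported in the abstract.
# Counts are from the paper's summary of nine HTA diagnostic accuracy reviews;
# variable names are illustrative, not from the source.

included_citations = 302          # total included citations across the 9 reviews
indexed_medline_embase = 287      # indexed in MEDLINE and/or EMBASE
retrieved_by_searches = 256       # retrieved by the reviews' MEDLINE/EMBASE searches
found_in_reference_lists = 24     # non-retrieved citations found via reference checking

def pct(part: int, whole: int) -> str:
    """Format a proportion as 'part/whole (xx%)'."""
    return f"{part}/{whole} ({part / whole:.0%})"

missed_by_searches = included_citations - retrieved_by_searches                   # 46 (15%)
covered_by_proposed_approach = retrieved_by_searches + found_in_reference_lists   # 280 (93%)
not_found = included_citations - covered_by_proposed_approach                     # 22 (7%)

print("Indexed in MEDLINE/EMBASE:      ", pct(indexed_medline_embase, included_citations))
print("Retrieved by database searches: ", pct(retrieved_by_searches, included_citations))
print("Missed by database searches:    ", pct(missed_by_searches, included_citations))
print("Recovered via reference lists:  ", pct(found_in_reference_lists, included_citations))
print("Covered by proposed approach:   ", pct(covered_by_proposed_approach, included_citations))
print("Not found by proposed approach: ", pct(not_found, included_citations))
```

Running the sketch reproduces the abstract's figures: 95% indexed, 85% retrieved by the database searches, 8% recovered through reference checking, and 93% covered overall by the proposed MEDLINE, EMBASE and reference-checking approach.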

X Demographics

The data shown below were collected from the profiles of 39 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 53 Mendeley readers of this research output.

Geographical breakdown

Country           Count   As %
United Kingdom        2     4%
Canada                2     4%
United States         2     4%
Spain                 1     2%
Unknown              46    87%

Demographic breakdown

Readers by professional status   Count   As %
Librarian                           14    26%
Researcher                           8    15%
Student > Master                     6    11%
Student > Ph. D. Student             4     8%
Student > Bachelor                   3     6%
Other                                9    17%
Unknown                              9    17%

Readers by discipline                  Count   As %
Medicine and Dentistry                    21    40%
Nursing and Health Professions             4     8%
Social Sciences                            4     8%
Engineering                                2     4%
Agricultural and Biological Sciences       1     2%
Other                                      8    15%
Unknown                                   13    25%
Attention Score in Context

This research output has an Altmetric Attention Score of 31. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 19 July 2022.
All research outputs: #1,262,440 of 25,388,229 outputs
Outputs from Systematic Reviews: #176 of 2,227 outputs
Outputs of similar age: #15,136 of 273,980 outputs
Outputs of similar age from Systematic Reviews: #6 of 27 outputs
Altmetric has tracked 25,388,229 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 95th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 2,227 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.1. This one has done particularly well, scoring higher than 92% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 273,980 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 94% of its contemporaries.
We're also able to compare this research output to 27 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 81% of its contemporaries.
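As a rough illustration only, the percentiles quoted in this section can be recovered from the rank and cohort-size pairs listed above. The sketch below assumes a simple rank-based formula (the share of the other outputs in the cohort that score lower); this is an assumption rather than Altmetric's documented method, but it matches the reported figures after rounding.

```python
# Hedged illustration of how the percentiles quoted above can be recovered from
# the rank/cohort pairs listed in this section. The formula is an assumption
# (fraction of the *other* outputs that score lower), not Altmetric's documented
# method, but it reproduces the reported figures after rounding.

rankings = {
    "All research outputs": (1_262_440, 25_388_229),
    "Outputs from Systematic Reviews": (176, 2_227),
    "Outputs of similar age": (15_136, 273_980),
    "Outputs of similar age from Systematic Reviews": (6, 27),
}

for label, (rank, total) in rankings.items():
    outranked = total - rank                    # outputs ranked below this one (ignoring ties)
    percentile = 100 * outranked / (total - 1)  # compared against the other total-1 outputs
    print(f"{label}: #{rank:,} of {total:,} -> higher than {percentile:.0f}% of the cohort")
```

Under that assumption the four comparisons come out at roughly 95%, 92%, 94% and 81%, in line with the percentages reported in this section.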