
Automating data extraction in systematic reviews: a systematic review

Overview of attention for article published in Systematic Reviews, June 2015

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (95th percentile)
  • High Attention Score compared to outputs of the same age and source (87th percentile)

Mentioned by

62 X users

Citations

163 Dimensions

Readers on

391 Mendeley
3 CiteULike
Title
Automating data extraction in systematic reviews: a systematic review
Published in
Systematic Reviews, June 2015
DOI 10.1186/s13643-015-0066-7
Authors

Siddhartha R. Jonnalagadda, Pawan Goyal, Mark D. Huffman

Abstract

Automation of parts of the systematic review process, specifically the data extraction step, may be an important strategy to reduce the time necessary to complete a systematic review. However, the state of the science of automatically extracting data elements from full texts has not been well described. This paper presents a systematic review of published and unpublished methods to automate data extraction for systematic reviews. We systematically searched PubMed, IEEEXplore, and ACM Digital Library to identify potentially relevant articles. We included reports that met the following criteria: 1) the methods or results section described which entities were or needed to be extracted, and 2) at least one entity was automatically extracted, with evaluation results presented for that entity. We also reviewed the citations from included reports. Out of a total of 1190 unique citations that met our search criteria, we found 26 published reports describing automatic extraction of at least one of more than 52 potential data elements used in systematic reviews. For 25 (48 %) of the data elements used in systematic reviews, there were attempts from various researchers to extract information automatically from the publication text. Of these, 14 (27 %) data elements were completely extracted, but the highest number of data elements extracted automatically by a single study was seven. Most of the data elements were extracted with F-scores (the harmonic mean of sensitivity and positive predictive value) of over 70 %. We found no unified information extraction framework tailored to the systematic review process, and published reports focused on a limited number (1-7) of data elements. Biomedical natural language processing techniques have not been fully utilized to automate, either fully or partially, the data extraction step of systematic reviews.
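The abstract reports F-scores above 70 %, where the F-score is the harmonic mean of sensitivity (recall) and positive predictive value (precision). A minimal sketch of that calculation, with illustrative numbers rather than figures from any of the reviewed studies:

```python
def f1_score(sensitivity: float, ppv: float) -> float:
    """Harmonic mean of sensitivity (recall) and
    positive predictive value (precision)."""
    return 2 * sensitivity * ppv / (sensitivity + ppv)

# A hypothetical extractor with 80% sensitivity and 65% PPV:
print(round(f1_score(0.80, 0.65), 3))  # 0.717 -> clears the 70% mark
```

Because it is a harmonic mean, the F-score is pulled toward the weaker of the two components, so an extractor cannot reach 70 % by excelling at only one of them.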

X Demographics

The data shown below were collected from the profiles of 62 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 391 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Netherlands 1 <1%
Italy 1 <1%
Finland 1 <1%
United Kingdom 1 <1%
Canada 1 <1%
United States 1 <1%
Unknown 385 98%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 60 15%
Student > Master 57 15%
Researcher 40 10%
Librarian 21 5%
Student > Bachelor 21 5%
Other 83 21%
Unknown 109 28%
Readers by discipline Count As %
Computer Science 64 16%
Medicine and Dentistry 59 15%
Nursing and Health Professions 25 6%
Agricultural and Biological Sciences 18 5%
Business, Management and Accounting 16 4%
Other 86 22%
Unknown 123 31%
Attention Score in Context

This research output has an Altmetric Attention Score of 37. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 29 January 2022.
All research outputs
#1,024,315
of 24,242,692 outputs
Outputs from Systematic Reviews
#139
of 2,107 outputs
Outputs of similar age
#12,727
of 268,142 outputs
Outputs of similar age from Systematic Reviews
#5
of 32 outputs
Altmetric has tracked 24,242,692 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 95th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 2,107 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.1. This one has done particularly well, scoring higher than 93% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 268,142 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 95% of its contemporaries.
We're also able to compare this research output to 32 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 87% of its contemporaries.
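The percentile figures above are consistent with counting the share of the *other* outputs in each comparison group that this one outranks. The helper below is an assumption about how those figures are derived, not Altmetric's published method; the ranks and group sizes are taken from the tables above:

```python
def percentile_rank(rank: int, total: int) -> float:
    """Share (in %) of the other (total - 1) outputs that an output
    ranked `rank` (1 = best) scores higher than. Ties are ignored."""
    return (total - rank) / (total - 1) * 100

# Figures from the rankings above (truncated to whole percentiles):
print(int(percentile_rank(1_024_315, 24_242_692)))  # 95: all research outputs
print(int(percentile_rank(139, 2_107)))             # 93: Systematic Reviews
print(int(percentile_rank(12_727, 268_142)))        # 95: outputs of similar age
print(int(percentile_rank(5, 32)))                  # 87: similar age, same source
```

Note the divisor is `total - 1` (the other outputs, excluding this one); dividing by `total` would give 84 instead of the reported 87 for the rank-5-of-32 comparison.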