
Supporting systematic reviews using LDA-based document representations

Overview of attention for an article published in Systematic Reviews, November 2015

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (81st percentile)
  • Good Attention Score compared to outputs of the same age and source (72nd percentile)

Mentioned by

12 X users

Citations

55 Dimensions

Readers on

135 Mendeley
Title
Supporting systematic reviews using LDA-based document representations
Published in
Systematic Reviews, November 2015
DOI 10.1186/s13643-015-0117-0
Authors

Yuanhan Mo, Georgios Kontonatsios, Sophia Ananiadou

Abstract

Identifying relevant studies for inclusion in a systematic review (i.e. screening) is a complex, laborious and expensive task. Recently, a number of studies have shown that the use of machine learning and text mining methods to automatically identify relevant studies has the potential to drastically decrease the workload involved in the screening phase. The vast majority of these machine learning methods exploit the same underlying principle, i.e. a study is modelled as a bag-of-words (BOW). We explore the use of topic modelling methods to derive a more informative representation of studies. We apply latent Dirichlet allocation (LDA), an unsupervised topic modelling approach, to automatically identify topics in a collection of studies. We then represent each study as a distribution over LDA topics. Additionally, we enrich the topics derived using LDA with multi-word terms identified by an automatic term recognition (ATR) tool. For evaluation purposes, we carry out automatic identification of relevant studies using support vector machine (SVM)-based classifiers that employ both our novel topic-based representation and the BOW representation. Our results show that the SVM classifier is able to identify a greater number of relevant studies when using the LDA representation than when using the BOW representation. These observations hold for two systematic reviews from the clinical domain and three reviews from the social science domain. A topic-based feature representation of documents outperforms the BOW representation when applied to the task of automatic citation screening. The proposed term-enriched topics are more informative and less ambiguous to systematic reviewers.
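To make the pipeline described in the abstract concrete, the following is a minimal sketch of the general approach (LDA topic features feeding an SVM classifier for citation screening). It is not the authors' implementation: the library choice (scikit-learn), the parameter values and the toy documents are illustrative assumptions, and the paper's ATR-based term enrichment step is not reproduced here.

```python
# Minimal sketch: represent each candidate study as an LDA topic distribution,
# then train an SVM to separate relevant from irrelevant citations.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

# Hypothetical screening data: title+abstract text and reviewer decisions.
documents = [
    "randomised controlled trial of cognitive behavioural therapy for depression",
    "topic modelling methods for mining large document collections",
    "clinical trial evaluating psychotherapy outcomes in adolescents",
    "algorithms for distributed graph processing on commodity clusters",
]
labels = [1, 0, 1, 0]  # 1 = relevant to the review, 0 = irrelevant

# LDA is fitted on bag-of-words counts ...
vectorizer = CountVectorizer(stop_words="english")
bow = vectorizer.fit_transform(documents)

# ... and each study is then re-represented as a distribution over topics.
lda = LatentDirichletAllocation(n_components=10, random_state=0)
topic_features = lda.fit_transform(bow)  # shape: (n_studies, n_topics)

# A linear SVM is trained on the topic-based features instead of raw BOW.
classifier = LinearSVC()
classifier.fit(topic_features, labels)

# New citations are screened by projecting them into the same topic space.
new_citation = ["mindfulness based therapy trial for anxiety disorders"]
new_features = lda.transform(vectorizer.transform(new_citation))
print(classifier.predict(new_features))
```

In practice the topic count, regularisation and evaluation protocol would be tuned per review; the sketch only illustrates how a topic distribution replaces the BOW vector as the classifier's input.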

X Demographics

The data shown below were collected from the profiles of 12 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 135 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
United Kingdom 1 <1%
Canada 1 <1%
Unknown 133 99%

Demographic breakdown

Readers by professional status Count As %
Student > Master 26 19%
Student > Ph. D. Student 22 16%
Researcher 15 11%
Librarian 7 5%
Student > Doctoral Student 7 5%
Other 27 20%
Unknown 31 23%
Readers by discipline Count As %
Computer Science 44 33%
Medicine and Dentistry 20 15%
Agricultural and Biological Sciences 6 4%
Social Sciences 6 4%
Engineering 6 4%
Other 17 13%
Unknown 36 27%
Attention Score in Context

This research output has an Altmetric Attention Score of 8. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 March 2016.
All research outputs: #5,006,513 of 26,362,953 outputs
Outputs from Systematic Reviews: #976 of 2,290 outputs
Outputs of similar age: #72,546 of 397,211 outputs
Outputs of similar age from Systematic Reviews: #12 of 44 outputs
Altmetric has tracked 26,362,953 research outputs across all sources so far. Compared to these, this one has done well and is in the 80th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 2,290 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.0. This one has received more attention than average, scoring higher than 57% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 397,211 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 81% of its contemporaries.
We're also able to compare this research output to 44 others from the same source that were published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 72% of its contemporaries.
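As a rough illustration of how these percentiles follow from the ranks listed above (an assumption about the arithmetic, not Altmetric's published methodology), an output ranked r out of n scores higher than roughly (1 - r/n) of the others:

```python
# Back-of-the-envelope check, assuming percentile ~ share of outputs ranked below this one.
def approx_percentile(rank: int, total: int) -> float:
    return 100 * (1 - rank / total)

print(approx_percentile(5_006_513, 26_362_953))  # ~81.0 -> quoted as 80th percentile / top 25% overall
print(approx_percentile(976, 2_290))             # ~57.4 -> "higher than 57% of its peers"
print(approx_percentile(72_546, 397_211))        # ~81.7 -> "81st percentile" among similar-age outputs
print(approx_percentile(12, 44))                 # ~72.7 -> "72nd percentile" among similar-age, same-source outputs
```

The small gaps between these figures and the quoted percentiles are consistent with rounding and with whatever tie-breaking Altmetric applies.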