
Latent Dirichlet Allocation in predicting clinical trial terminations

Overview of attention for article published in BMC Medical Informatics and Decision Making, November 2019

Mentioned by
1 X user

Citations
16 (Dimensions)

Readers on Mendeley
45
Title
Latent Dirichlet Allocation in predicting clinical trial terminations
Published in
BMC Medical Informatics and Decision Making, November 2019
DOI 10.1186/s12911-019-0973-y
Authors

Simon Geletta, Lendie Follett, Marcia Laugerman

Abstract

This study used natural language processing (NLP) and machine learning (ML) techniques to identify reliable patterns within research narrative documents that distinguish studies that complete successfully from those that terminate. Recent research findings have reported that at least 10% of all studies funded by major research funding agencies terminate without yielding useful results. Since it is well known that scientific studies that receive funding from major agencies are carefully planned and rigorously vetted through the peer-review process, it was somewhat surprising to us that study terminations are this prevalent. Moreover, our review of the literature on study terminations suggested that the reasons for terminations are not well understood. We therefore aimed to address that knowledge gap by seeking to identify the factors that contribute to study failures. We used data from the ClinicalTrials.gov repository, from which we extracted both structured data (study characteristics) and unstructured data (the narrative description of the studies). We applied natural language processing techniques to the unstructured data to quantify the risk of termination by identifying distinctive topics that are differentially associated with terminated and completed trials. We used the Latent Dirichlet Allocation (LDA) technique to derive 25 "topics" with corresponding sets of probabilities, which we then used to predict study termination with random forest modeling. We fit two distinct models: one using only structured data as predictors, and another using both the structured data and the 25 text topics derived from the unstructured data. In this paper, we demonstrate the interpretive and predictive value of LDA as it relates to predicting clinical trial failure. The results also demonstrate that the combined modeling approach yields robust predictive probabilities, in terms of both sensitivity and specificity, relative to a model that uses the structured data alone. Our study demonstrated that topic modeling with LDA significantly raises the utility of unstructured data in predicting the completion versus termination of studies. This study sets the direction for future research to evaluate the viability of the designs of health studies.
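As a rough illustration of the pipeline described in the abstract (not the authors' code), the sketch below uses scikit-learn to fit a 25-topic LDA model on trial description text and feeds the resulting topic probabilities, together with structured study characteristics, into a random forest classifier that predicts termination. The file name and the column names (description, enrollment, phase_code, num_sites, terminated) are hypothetical placeholders.

```python
# Minimal sketch of the combined LDA + random forest approach, under
# assumed data layout; not the authors' implementation.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical input: one row per trial, with a free-text "description"
# column, structured covariates, and a binary "terminated" label.
trials = pd.read_csv("trials.csv")

# Bag-of-words representation of the narrative descriptions.
vectorizer = CountVectorizer(stop_words="english", max_features=5000)
dtm = vectorizer.fit_transform(trials["description"])

# 25 LDA topics, matching the number reported in the abstract; each trial
# gets a vector of 25 topic probabilities.
lda = LatentDirichletAllocation(n_components=25, random_state=0)
topic_probs = lda.fit_transform(dtm)

# Combine structured predictors (placeholder column names) with topic probabilities.
structured = trials[["enrollment", "phase_code", "num_sites"]].to_numpy()
X = np.hstack([structured, topic_probs])
y = trials["terminated"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Random forest on the combined feature set; a structured-data-only model
# would simply omit the topic_probs columns.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("held-out accuracy:", rf.score(X_test, y_test))
```

Fitting the same classifier once with and once without the topic-probability columns mirrors the paper's comparison between the structured-only model and the combined model.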

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 45 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 45 100%

Demographic breakdown

Readers by professional status Count As %
Researcher 8 18%
Student > Master 5 11%
Student > Doctoral Student 4 9%
Student > Ph. D. Student 4 9%
Other 2 4%
Other 6 13%
Unknown 16 36%
Readers by discipline Count As %
Computer Science 11 24%
Agricultural and Biological Sciences 3 7%
Medicine and Dentistry 3 7%
Social Sciences 2 4%
Psychology 2 4%
Other 4 9%
Unknown 20 44%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 29 November 2019.
All research outputs: #20,592,137 of 23,177,498 outputs
Outputs from BMC Medical Informatics and Decision Making: #1,827 of 2,016 outputs
Outputs of similar age: #384,556 of 459,228 outputs
Outputs of similar age from BMC Medical Informatics and Decision Making: #58 of 76 outputs
Altmetric has tracked 23,177,498 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 2,016 research outputs from this source. They receive a mean Attention Score of 4.9. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 459,228 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 76 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.