
A clinical trials corpus annotated with UMLS entities to enhance the access to evidence-based medicine

Overview of attention for article published in BMC Medical Informatics and Decision Making, February 2021

Mentioned by

3 X users

Citations

28 Dimensions

Readers on

53 Mendeley
Title
A clinical trials corpus annotated with UMLS entities to enhance the access to evidence-based medicine
Published in
BMC Medical Informatics and Decision Making, February 2021
DOI 10.1186/s12911-021-01395-z
Pubmed ID
Authors

Leonardo Campillos-Llanos, Ana Valverde-Mateos, Adrián Capllonch-Carrión, Antonio Moreno-Sandoval

Abstract

The large volume of medical literature makes it difficult for healthcare professionals to keep abreast of the latest studies that support Evidence-Based Medicine. Natural language processing enhances access to relevant information, and gold standard corpora are required to improve systems. To contribute a new dataset for this domain, we collected the Clinical Trials for Evidence-Based Medicine in Spanish (CT-EBM-SP) corpus. We annotated 1200 texts about clinical trials with entities from the Unified Medical Language System semantic groups: anatomy (ANAT), pharmacological and chemical substances (CHEM), pathologies (DISO), and lab tests, diagnostic or therapeutic procedures (PROC). We doubly annotated 10% of the corpus and measured inter-annotator agreement (IAA) using F-measure. As a use case, we ran medical entity recognition experiments with neural network models. This resource contains 500 abstracts of journal articles about clinical trials and 700 announcements of trial protocols (292 173 tokens). We annotated 46 699 entities (13.98% are nested entities). Regarding IAA, we obtained an average F-measure of 85.65% (±4.79, strict match) and 93.94% (±3.31, relaxed match). In the use case experiments, we achieved recognition results ranging from 80.28% (±0.99) to 86.74% (±0.19) average F-measure. Our results show that this resource is adequate for experiments with state-of-the-art approaches to biomedical named entity recognition. It is freely distributed at: http://www.lllf.uam.es/ESP/nlpmedterm_en.html . The methods are generalizable to other languages with similar available sources.
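The abstract reports inter-annotator agreement as a pairwise F-measure under strict and relaxed matching. The paper's own evaluation script is not reproduced here; a minimal sketch of the strict-match variant, assuming annotations are represented as (start, end, label) offset tuples and one annotator is treated as the reference, might look like:

```python
def f_measure(ann_a, ann_b):
    """Pairwise IAA as F-measure under strict matching: an entity counts
    as agreed only if span offsets AND label are identical.
    Annotator A is taken as the reference, annotator B as the prediction.
    Each annotation is a (start, end, label) tuple."""
    set_a, set_b = set(ann_a), set(ann_b)
    if not set_a or not set_b:
        return 0.0
    tp = len(set_a & set_b)          # entities both annotators marked identically
    precision = tp / len(set_b)
    recall = tp / len(set_a)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: two annotators agree on two of three entities each
a = [(0, 7, "DISO"), (12, 20, "PROC"), (25, 30, "CHEM")]
b = [(0, 7, "DISO"), (12, 20, "PROC"), (40, 45, "ANAT")]
print(round(f_measure(a, b), 4))  # 2 matches, P = R = 2/3 -> 0.6667
```

A relaxed-match variant would instead count overlapping spans with the same label as agreement; the corpus paper should be consulted for the exact matching criteria used.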

X Demographics


The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers


The data shown below were compiled from readership statistics for 53 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 53 100%

Demographic breakdown

Readers by professional status Count As %
Student > Master 8 15%
Researcher 6 11%
Student > Ph. D. Student 6 11%
Student > Bachelor 4 8%
Professor 3 6%
Other 8 15%
Unknown 18 34%
Readers by discipline Count As %
Computer Science 20 38%
Medicine and Dentistry 2 4%
Engineering 2 4%
Social Sciences 2 4%
Linguistics 1 2%
Other 3 6%
Unknown 23 43%
Attention Score in Context


This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 24 February 2021.
All research outputs
#18,171,423
of 23,344,526 outputs
Outputs from BMC Medical Informatics and Decision Making
#1,528
of 2,022 outputs
Outputs of similar age
#299,814
of 418,566 outputs
Outputs of similar age from BMC Medical Informatics and Decision Making
#40
of 50 outputs
Altmetric has tracked 23,344,526 research outputs across all sources so far. This one is in the 19th percentile – i.e., 19% of other outputs scored the same or lower than it.
So far Altmetric has tracked 2,022 research outputs from this source. They receive a mean Attention Score of 5.0. This one is in the 20th percentile – i.e., 20% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 418,566 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 24th percentile – i.e., 24% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 50 others from the same source and published within six weeks on either side of this one. This one is in the 18th percentile – i.e., 18% of its contemporaries scored the same or lower than it.