
An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition

Overview of attention for article published in BMC Bioinformatics, April 2015

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (92nd percentile)
  • High Attention Score compared to outputs of the same age and source (96th percentile)

Mentioned by

  • 1 blog
  • 13 X users
  • 1 patent
  • 2 Facebook pages
  • 1 Wikipedia page

Citations

  • 268 Dimensions

Readers on

  • 189 Mendeley
  • 1 CiteULike
Title
An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition
Published in
BMC Bioinformatics, April 2015
DOI 10.1186/s12859-015-0564-6
Pubmed ID
Authors

George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artières, Axel-Cyrille Ngonga Ngomo, Norman Heino, Eric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, Georgios Paliouras

Abstract

This article provides an overview of the first BIOASQ challenge, a competition on large-scale biomedical semantic indexing and question answering (QA), which took place between March and September 2013. BIOASQ assesses the ability of systems to semantically index very large numbers of biomedical scientific articles, and to return concise and user-understandable answers to given natural language questions by combining information from biomedical articles and ontologies.

The 2013 BIOASQ competition comprised two tasks, Task 1a and Task 1b. In Task 1a participants were asked to automatically annotate new PubMed documents with MeSH headings. Twelve teams participated in Task 1a, submitting 46 system runs in total; one team performed consistently better than the MTI indexer that NLM uses to suggest MeSH headings to curators. Task 1b used benchmark datasets containing 29 development and 282 test English questions, along with gold standard (reference) answers, prepared by a team of biomedical experts from around Europe; participants had to produce answers automatically. Three teams participated in Task 1b, with 11 system runs. The BIOASQ infrastructure, including benchmark datasets, evaluation mechanisms, and the results of the participants and baseline methods, is publicly available.

A publicly available evaluation infrastructure for biomedical semantic indexing and QA has been developed, which includes benchmark datasets and can be used to evaluate systems that: assign MeSH headings to published articles or to English questions; retrieve relevant RDF triples from ontologies, and relevant articles and snippets from PubMed Central; and produce "exact" and paragraph-sized "ideal" answers (summaries). The results of the systems that participated in the 2013 BIOASQ competition are promising. In Task 1a one of the systems performed consistently better than NLM's MTI indexer. In Task 1b the systems received high scores in the manual evaluation of the "ideal" answers; hence, they produced high-quality summaries as answers. Overall, BIOASQ helped obtain a unified view of how techniques from text classification, semantic indexing, document and passage retrieval, question answering, and text summarization can be combined to allow biomedical experts to obtain concise, user-understandable answers to questions reflecting their real information needs.
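Task 1a is, at its core, a large-scale multi-label classification problem: each PubMed abstract must be annotated with a set of MeSH headings, and system output is compared against the curators' gold annotations. As a rough illustration, the sketch below computes a micro-averaged F1 over gold and predicted heading sets. This is a minimal sketch assuming a flat micro-F measure (BIOASQ also reports hierarchical measures, omitted here), and the example annotations are invented for illustration.

```python
# Minimal sketch of flat multi-label evaluation for MeSH indexing.
# Assumption: micro-averaged F1 over per-document heading sets; the
# example gold/pred annotations below are hypothetical.
def micro_f1(gold: list[set[str]], pred: list[set[str]]) -> float:
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # headings correctly assigned
    fp = sum(len(p - g) for g, p in zip(gold, pred))  # spurious headings
    fn = sum(len(g - p) for g, p in zip(gold, pred))  # missed headings
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical gold vs. system annotations for two PubMed abstracts:
gold = [{"Humans", "Neoplasms", "Risk Factors"}, {"Humans", "Algorithms"}]
pred = [{"Humans", "Neoplasms"}, {"Humans", "Algorithms", "Software"}]
print(f"micro-F1 = {micro_f1(gold, pred):.3f}")  # -> micro-F1 = 0.800
```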

X Demographics

The data shown below were collected from the profiles of 13 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 189 Mendeley readers of this research output.

Geographical breakdown

Country             Count  As %
United States           2    1%
Portugal                1   <1%
Switzerland             1   <1%
Korea, Republic of      1   <1%
United Kingdom          1   <1%
Greece                  1   <1%
Spain                   1   <1%
Unknown               181   96%

Demographic breakdown

Readers by professional status  Count  As %
Researcher                         38   20%
Student > Ph.D. Student            37   20%
Student > Master                   22   12%
Student > Bachelor                 12    6%
Student > Postgraduate              8    4%
Other                              25   13%
Unknown                            47   25%

Readers by discipline                         Count  As %
Computer Science                                 91   48%
Engineering                                       9    5%
Medicine and Dentistry                            5    3%
Agricultural and Biological Sciences              5    3%
Biochemistry, Genetics and Molecular Biology      4    2%
Other                                            16    8%
Unknown                                          59   31%
Attention Score in Context

This research output has an Altmetric Attention Score of 23. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 06 December 2021.
All research outputs: #1,408,423 of 22,800,560 outputs
Outputs from BMC Bioinformatics: #243 of 7,281 outputs
Outputs of similar age: #19,281 of 263,976 outputs
Outputs of similar age from BMC Bioinformatics: #5 of 135 outputs
Altmetric has tracked 22,800,560 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 93rd percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 7,281 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.4. This one has done particularly well, scoring higher than 96% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 263,976 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 92% of its contemporaries.
We're also able to compare this research output to 135 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 96% of its contemporaries.
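The percentile figures quoted above follow directly from the ranks and totals in the context table. As a minimal sketch, assuming the percentile is simply the share of tracked outputs in the comparison set that this one outranks, the following reproduces the quoted numbers:

```python
# A minimal sketch, assuming Altmetric's percentile is the share of tracked
# outputs that this one outranks; ranks/totals are taken from the context
# table above.
def percentile(rank: int, total: int) -> float:
    """Percentage of outputs in the comparison set that rank below this one."""
    return (1 - rank / total) * 100

contexts = {
    "All research outputs": (1_408_423, 22_800_560),
    "Outputs from BMC Bioinformatics": (243, 7_281),
    "Outputs of similar age": (19_281, 263_976),
    "Outputs of similar age from BMC Bioinformatics": (5, 135),
}

for name, (rank, total) in contexts.items():
    print(f"{name}: {percentile(rank, total):.1f}")
# -> 93.8, 96.7, 92.7, 96.3 — consistent with the 93rd, 96th, 92nd, and
#    96th percentiles quoted in the prose above
```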