
Semantic annotation of consumer health questions

Overview of attention for an article published in BMC Bioinformatics, February 2018

Mentioned by

1 X user

Citations

26 Dimensions

Readers on

47 Mendeley
Title: Semantic annotation of consumer health questions
Published in: BMC Bioinformatics, February 2018
DOI: 10.1186/s12859-018-2045-1
Authors: Halil Kilicoglu, Asma Ben Abacha, Yassine Mrabet, Sonya E. Shooshan, Laritza Rodriguez, Kate Masterton, Dina Demner-Fushman

Abstract

Consumers increasingly use online resources for their health information needs. While current search engines can address these needs to some extent, they generally do not take into account that most health information needs are complex and can only fully be expressed in natural language. Consumer health question answering (QA) systems aim to fill this gap. A major challenge in developing consumer health QA systems is extracting relevant semantic content from the natural language questions (question understanding). To develop effective question understanding tools, question corpora semantically annotated for relevant question elements are needed. In this paper, we present a two-part consumer health question corpus annotated with several semantic categories: named entities, question triggers/types, question frames, and question topic. The first part (CHQA-email) consists of relatively long email requests received by the U.S. National Library of Medicine (NLM) customer service, while the second part (CHQA-web) consists of shorter questions posed to the MedlinePlus search engine as queries. Each question has been annotated by two annotators. The annotation methodology is largely the same between the two parts of the corpus; however, we also explain and justify the differences between them. Additionally, we provide information about corpus characteristics, inter-annotator agreement, and our attempts to measure annotation confidence in the absence of adjudication of annotations. The resulting corpus consists of 2614 questions (CHQA-email: 1740, CHQA-web: 874). Problems are the most frequent named entities, while treatment and general information questions are the most common question types. Inter-annotator agreement was generally modest: question types and topics yielded the highest agreement, while the agreement for more complex frame annotations was lower. Agreement in CHQA-web was consistently higher than that in CHQA-email. Pairwise inter-annotator agreement proved most useful in estimating annotation confidence. To our knowledge, our corpus is the first to focus on annotation of uncurated consumer health questions. It is currently used to develop machine learning-based methods for question understanding. We make the corpus publicly available to stimulate further research on consumer health QA.
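The abstract reports that pairwise inter-annotator agreement proved most useful for estimating annotation confidence. As a rough illustration of what such a pairwise computation looks like, the sketch below computes Cohen's kappa between two annotators' question-type labels. Note the abstract does not name the exact agreement metric used in the paper, so the choice of kappa, the label values, and the function itself are assumptions for this sketch.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' label sequences.

    Illustrative only: kappa is assumed here, since the abstract
    does not specify which pairwise agreement metric was used.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical question-type labels from two annotators:
ann1 = ["treatment", "information", "cause", "treatment", "diagnosis"]
ann2 = ["treatment", "information", "treatment", "treatment", "diagnosis"]
print(f"kappa = {cohen_kappa(ann1, ann2):.2f}")  # 0.71 on this toy data
```

Kappa-style measures discount agreement expected by chance, which is why they are often preferred to raw percent agreement when, as here, a few categories (e.g., treatment questions) dominate the label distribution.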

X Demographics

The data shown below were collected from the profile of the 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 47 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Unknown | 47 | 100%

Demographic breakdown

Readers by professional status | Count | As %
Researcher | 10 | 21%
Student > Ph. D. Student | 6 | 13%
Student > Master | 6 | 13%
Student > Doctoral Student | 3 | 6%
Other | 3 | 6%
Other | 7 | 15%
Unknown | 12 | 26%
Readers by discipline | Count | As %
Computer Science | 14 | 30%
Medicine and Dentistry | 6 | 13%
Biochemistry, Genetics and Molecular Biology | 2 | 4%
Arts and Humanities | 2 | 4%
Linguistics | 1 | 2%
Other | 7 | 15%
Unknown | 15 | 32%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 08 February 2018.
All research outputs: #18,836,331 of 23,344,526 outputs
Outputs from BMC Bioinformatics: #6,430 of 7,387 outputs
Outputs of similar age: #329,682 of 439,064 outputs
Outputs of similar age from BMC Bioinformatics: #94 of 117 outputs
Altmetric has tracked 23,344,526 research outputs across all sources so far. This one is in the 11th percentile – i.e., 11% of other outputs scored the same or lower than it.
So far Altmetric has tracked 7,387 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.5. This one is in the 5th percentile – i.e., 5% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 439,064 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 14th percentile – i.e., 14% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 117 others from the same source and published within six weeks on either side of this one. This one is in the 15th percentile – i.e., 15% of its contemporaries scored the same or lower than it.
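The percentile figures above all follow the definition quoted in this section: the percentage of comparison outputs that scored the same as or lower than this one. A minimal sketch of that definition follows; the toy score pool is invented, and Altmetric's exact tie handling and rounding are not documented here, so treat this as an approximation rather than the service's actual computation.

```python
def percentile_rank(score, other_scores):
    """Percent of comparison outputs scoring the same as or lower.

    Follows the 'same or lower' definition quoted above; exact
    tie handling and rounding are assumptions.
    """
    same_or_lower = sum(s <= score for s in other_scores)
    return 100.0 * same_or_lower / len(other_scores)

# Hypothetical pool of Attention Scores for outputs of similar age:
pool = [0.25, 0.5, 1, 1, 2, 3, 5, 8, 13, 40]
print(f"{percentile_rank(1, pool):.0f}th percentile")  # 40th, for this toy pool
```

Because attention is heavily skewed toward a few highly shared outputs, a score of 1 can still land well below the median, which is consistent with the low percentiles reported above.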