
Bio-SimVerb and Bio-SimLex: wide-coverage evaluation sets of word similarity in biomedicine

Overview of attention for article published in BMC Bioinformatics, February 2018

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (53rd percentile)
  • Above-average Attention Score compared to outputs of the same age and source (55th percentile)

Mentioned by

5 X users

Readers on

27 Mendeley readers
Title
Bio-SimVerb and Bio-SimLex: wide-coverage evaluation sets of word similarity in biomedicine
Published in
BMC Bioinformatics, February 2018
DOI 10.1186/s12859-018-2039-z
Pubmed ID
Authors

Billy Chiu, Sampo Pyysalo, Ivan Vulić, Anna Korhonen

Abstract

Word representations support a variety of Natural Language Processing (NLP) tasks. The quality of these representations is typically assessed by comparing distances in the induced vector spaces against human similarity judgements. Whereas comprehensive evaluation resources have recently been developed for the general domain, similar resources for biomedicine currently suffer from a lack of coverage, both in terms of the word types included and the semantic distinctions captured. Notably, verbs have been excluded, although they are essential for the interpretation of biomedical language. Further, current resources do not distinguish between semantic similarity and semantic relatedness, although this distinction has been shown to be an important predictor of the usefulness of word representations and of their performance in downstream applications. We present two novel comprehensive resources targeting the evaluation of word representations in biomedicine. These resources, Bio-SimVerb and Bio-SimLex, address the problems described above and can be used to evaluate verb and noun representations, respectively. In our experiments, we computed Pearson's correlation between performance on intrinsic and extrinsic tasks using twelve popular state-of-the-art representation models (e.g. word2vec models). The intrinsic-extrinsic correlations obtained with our datasets are notably higher than those obtained with previous intrinsic evaluation benchmarks such as UMNSRS and MayoSRS. In addition, when evaluating representation models on their ability to capture verb and noun semantics individually, we observe considerable variation in performance across models. Bio-SimVerb and Bio-SimLex enable intrinsic evaluation of word representations, and this evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimVerb and Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs); these are needed to identify the most accurate methods for learning class-specific representations. Bio-SimVerb and Bio-SimLex are publicly available.
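As a concrete illustration of the evaluation protocol the abstract describes, the Python sketch below scores word pairs by cosine similarity and compares those scores against human judgements (the intrinsic step), then correlates per-model intrinsic scores with downstream-task scores using Pearson's correlation (the intrinsic-extrinsic step). This is a minimal sketch, not the authors' released code: the cosine and intrinsic_score helpers are illustrative, and the per-model score lists are invented placeholders.

    # A minimal sketch of the evaluation protocol described in the abstract;
    # not the authors' released code. Word vectors, word pairs, and the
    # per-model score lists below are hypothetical placeholders.
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    def cosine(u, v):
        """Cosine similarity between two word vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def intrinsic_score(vectors, pairs, human_ratings):
        """Rank-correlate model similarities with human judgements -- the usual
        intrinsic measure on datasets such as Bio-SimVerb and Bio-SimLex."""
        model_sims = [cosine(vectors[w1], vectors[w2]) for w1, w2 in pairs]
        return spearmanr(model_sims, human_ratings).correlation

    # Intrinsic-extrinsic comparison across representation models: correlate
    # each model's intrinsic score with its downstream-task score using
    # Pearson's r, as the paper does. Numbers are invented for illustration.
    intrinsic = [0.52, 0.61, 0.48, 0.70]
    extrinsic = [0.74, 0.79, 0.71, 0.83]
    r, p = pearsonr(intrinsic, extrinsic)
    print(f"intrinsic-extrinsic Pearson r = {r:.2f} (p = {p:.3f})")

Note that intrinsic benchmarks conventionally report a rank correlation (Spearman's rho) against human ratings, while the paper's intrinsic-extrinsic comparison uses Pearson's r across models.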

X Demographics

The data shown below were collected from the profiles of the 5 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 27 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown     27      100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Ph.D. Student             6      22%
Researcher                          4      15%
Student > Doctoral Student          3      11%
Student > Master                    3      11%
Other                               2       7%
Other                               2       7%
Unknown                             7      26%
Readers by discipline                           Count    As %
Computer Science                                  9      33%
Biochemistry, Genetics and Molecular Biology      2       7%
Agricultural and Biological Sciences              2       7%
Nursing and Health Professions                    1       4%
Sports and Recreations                            1       4%
Other                                             1       4%
Unknown                                          11      41%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 31 August 2018.
All research outputs:                            #13,045,234 of 23,344,526 outputs
Outputs from BMC Bioinformatics:                 #3,691 of 7,387 outputs
Outputs of similar age:                          #202,537 of 439,066 outputs
Outputs of similar age from BMC Bioinformatics:  #52 of 118 outputs
Altmetric has tracked 23,344,526 research outputs across all sources so far. This one is in the 43rd percentile – i.e., 43% of other outputs scored the same or lower than it.
So far Altmetric has tracked 7,387 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.5. This one is in the 48th percentile – i.e., 48% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 439,066 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 53% of its contemporaries.
We're also able to compare this research output to 118 others from the same source and published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 55% of its contemporaries.
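For readers who want to sanity-check the percentile figures above, they follow (up to tie handling) from the rank/total pairs listed earlier on this page. A minimal Python sketch, assuming ties and Altmetric's exact rounding are ignored:

    # Rough recomputation of the percentiles quoted above from the rank/total
    # pairs listed on this page. Tie handling and rounding are ignored, so the
    # results land within a point or two of Altmetric's reported figures.
    def percentile(rank: int, total: int) -> float:
        """Share of tracked outputs ranked at or below this one."""
        return 100.0 * (total - rank) / total

    contexts = {
        "All research outputs": (13_045_234, 23_344_526),   # reported: 43rd
        "Outputs from BMC Bioinformatics": (3_691, 7_387),  # reported: 48th
        "Outputs of similar age": (202_537, 439_066),       # reported: 53rd
        "Similar age, same source": (52, 118),              # reported: 55th
    }

    for label, (rank, total) in contexts.items():
        print(f"{label}: ~{percentile(rank, total):.0f}th percentile")

The small offsets from the reported values reflect how Altmetric counts outputs that scored exactly the same as this one, which the simple formula above does not model.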