
Ambiguity and variability of database and software names in bioinformatics

Overview of attention for article published in Journal of Biomedical Semantics, June 2015

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (55th percentile)
  • Good Attention Score compared to outputs of the same age and source (75th percentile)

Mentioned by

  • X (Twitter): 3 users

Citations

  • Dimensions: 10 citations

Readers on

  • Mendeley: 28 readers
Title
Ambiguity and variability of database and software names in bioinformatics
Published in
Journal of Biomedical Semantics, June 2015
DOI 10.1186/s13326-015-0026-0
Authors

Geraint Duck, Aleksandar Kovacevic, David L. Robertson, Robert Stevens, Goran Nenadic

Abstract

There are numerous options available to achieve various tasks in bioinformatics, but until recently, there were no tools that could systematically identify mentions of databases and tools within the literature. In this paper we explore the variability and ambiguity of database and software name mentions and compare dictionary and machine learning approaches to their identification. Through the development and analysis of a corpus of 60 full-text documents manually annotated at the mention level, we report high variability and ambiguity in database and software mentions. On a test set of 25 full-text documents, a baseline dictionary look-up achieved an F-score of 46 %, highlighting not only variability and ambiguity but also the extensive number of new resources introduced. A machine learning approach achieved an F-score of 63 % (with precision of 74 %) and 70 % (with precision of 83 %) for strict and lenient matching respectively. We characterise the issues with various mention types and propose potential ways of capturing additional database and software mentions in the literature. Our analyses show that identification of mentions of databases and tools is a challenging task that cannot be achieved by relying on current manually-curated resource repositories. Although machine learning shows improvement and promise (primarily in precision), more contextual information needs to be taken into account to achieve a good degree of accuracy.
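
To make the reported figures concrete: assuming the abstract's "F-score" is the balanced F1 measure (the standard reading, though not stated here), the implied recall can be recovered from the reported F-score and precision. The sketch below also illustrates the kind of exact-match dictionary look-up used as a baseline; the resource names and example sentence are invented for illustration, not taken from the paper or its corpus.

```python
# Minimal sketch, assuming "F-score" means the balanced F1 measure: F1 = 2PR / (P + R).

def recall_from_f1(f1: float, precision: float) -> float:
    """Solve F1 = 2PR / (P + R) for recall R."""
    return f1 * precision / (2 * precision - f1)

# Figures reported in the abstract (strict and lenient matching).
for label, f1, p in [("strict", 0.63, 0.74), ("lenient", 0.70, 0.83)]:
    r = recall_from_f1(f1, p)
    print(f"{label}: precision={p:.0%}, F1={f1:.0%}, implied recall ~ {r:.0%}")
# strict:  implied recall ~ 55%
# lenient: implied recall ~ 61%

# A naive exact-match dictionary look-up of the kind used as the baseline.
# The dictionary entries and the sentence are hypothetical examples.
RESOURCE_NAMES = {"BLAST", "UniProt", "Ensembl"}
sentence = "Reads were aligned with BLAST and annotated via a local blast wrapper."
mentions = [tok for tok in sentence.rstrip(".").split() if tok in RESOURCE_NAMES]
print(mentions)  # ['BLAST'] -- the lower-case variant "blast" is missed,
                 # one instance of the name variability the paper reports
```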

X Demographics

The data shown below were collected from the profiles of the 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 28 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
United States        1     4%
Unknown             27    96%

Demographic breakdown

Readers by professional status   Count   As %
Student > Master                     6    21%
Researcher                           5    18%
Student > Bachelor                   3    11%
Student > Ph.D. Student              3    11%
Lecturer                             1     4%
Other                                3    11%
Unknown                              7    25%

Readers by discipline                  Count   As %
Computer Science                           7    25%
Agricultural and Biological Sciences       6    21%
Medicine and Dentistry                     3    11%
Engineering                                2     7%
Social Sciences                            1     4%
Other                                      1     4%
Unknown                                    8    29%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 06 July 2015.
All research outputs                                          #12,929,609 of 22,815,414 outputs
Outputs from Journal of Biomedical Semantics                  #179 of 364 outputs
Outputs of similar age                                        #116,199 of 263,394 outputs
Outputs of similar age from Journal of Biomedical Semantics   #2 of 8 outputs
Altmetric has tracked 22,815,414 research outputs across all sources so far. This one is in the 42nd percentile – i.e., 42% of other outputs scored the same or lower than it.
So far, Altmetric has tracked 364 research outputs from this source. They have received a mean Attention Score of 4.6. This one is in the 49th percentile – i.e., 49% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 263,394 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 55% of its contemporaries.
We're also able to compare this research output to 8 others from the same source and published within six weeks on either side of this one. This one has scored higher than 6 of them.
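
For readers unfamiliar with the convention, the percentile figures above count the outputs that scored the same as or lower than this one. A minimal sketch of that calculation follows; the scores are made up for illustration, not real Altmetric data.

```python
# Percentile rank in the sense used above: the share of tracked outputs
# whose Attention Score is the same as or lower than this output's score.

def percentile_rank(score: float, peer_scores: list[float]) -> float:
    same_or_lower = sum(1 for s in peer_scores if s <= score)
    return 100.0 * same_or_lower / len(peer_scores)

# Hypothetical contemporaries' scores, for illustration only.
contemporaries = [0, 0, 1, 1, 2, 2, 3, 5, 8, 40]
print(percentile_rank(3, contemporaries))  # 70.0
```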