
Annotation-based feature extraction from sets of SBML models

Overview of attention for article published in Journal of Biomedical Semantics, April 2015

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (56th percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • Wikipedia: 1 Wikipedia page

Citations

  • Dimensions: 14 citations

Readers on

  • Mendeley: 24 readers
  • CiteULike: 1 reader
Title
Annotation-based feature extraction from sets of SBML models
Published in
Journal of Biomedical Semantics, April 2015
DOI 10.1186/s13326-015-0014-4
Authors

Rebekka Alm, Dagmar Waltemath, Markus Wolfien, Olaf Wolkenhauer, Ron Henkel

Abstract

Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool for characterizing sets of models. These characteristics improve model classification, help identify additional features for model retrieval tasks, and enable the comparison of model sets. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of SBML models compiled from BioModels Database. To characterize each set, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three of the four methods are suitable for determining characteristic features of arbitrary model sets: the selected features vary with, and are specific to, the underlying model set. We show that the identified features map to concepts higher up in the ontology hierarchies than the concepts used for model annotation. Our analysis also reveals that the information content of ontology concepts and their frequency of use in model annotations do not correlate. Annotation-based feature extraction thus enables the comparison of model sets, in contrast to existing methods that support only model-to-keyword or model-to-model comparison.
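The pipeline the abstract describes (harvest ontology annotations from each SBML model, then derive features that characterize the set as a whole) can be pictured with a short sketch. This is only an illustration using the python-libsbml bindings, not the authors' implementation: the naive per-model concept count below stands in for the paper's four extraction methods, and the file names in the usage comment are hypothetical.

```python
# Minimal sketch: collect MIRIAM annotation URIs from a set of SBML models,
# then rank concepts by how many models in the set they annotate.
# Assumes the python-libsbml package (import name: libsbml).
from collections import Counter
import libsbml

def annotation_uris(model):
    """Yield every annotation resource URI attached to species and reactions."""
    elements = [model.getSpecies(i) for i in range(model.getNumSpecies())]
    elements += [model.getReaction(i) for i in range(model.getNumReactions())]
    for sbase in elements:
        for i in range(sbase.getNumCVTerms()):
            term = sbase.getCVTerm(i)
            for j in range(term.getNumResources()):
                yield term.getResourceURI(j)  # e.g. http://identifiers.org/GO:0006096

def characteristic_concepts(paths, top=10):
    """Count, for each concept URI, the number of models in the set using it."""
    counts = Counter()
    for path in paths:
        doc = libsbml.readSBML(path)
        model = doc.getModel()
        if model is None:
            continue  # skip unreadable or invalid files
        counts.update(set(annotation_uris(model)))  # one vote per model
    return counts.most_common(top)

# Hypothetical usage on a set of models downloaded from BioModels Database:
# for uri, n_models in characteristic_concepts(["BIOMD0000000001.xml"]):
#     print(n_models, uri)
```

Counting each concept at most once per model keeps the ranking a property of the set rather than of any single heavily annotated model; mapping the surviving URIs to ancestor terms in GO, ChEBI or SBO would be the next step under the paper's approach.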

Mendeley readers

The data shown below were compiled from readership statistics for 24 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown       24    100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Ph.D. Student               6     25%
Researcher                            5     21%
Professor                             2      8%
Librarian                             1      4%
Other                                 1      4%
Other                                 3     13%
Unknown                               6     25%
Readers by discipline                           Count    As %
Computer Science                                    5     21%
Agricultural and Biological Sciences                3     13%
Biochemistry, Genetics and Molecular Biology        2      8%
Social Sciences                                     2      8%
Medicine and Dentistry                              2      8%
Other                                               2      8%
Unknown                                             8     33%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 21 August 2015.
  • All research outputs: #7,457,701 of 22,800,560 outputs
  • Outputs from Journal of Biomedical Semantics: #145 of 364 outputs
  • Outputs of similar age: #90,528 of 264,074 outputs
  • Outputs of similar age from Journal of Biomedical Semantics: #8 of 14 outputs
Altmetric has tracked 22,800,560 research outputs across all sources so far. This one is in the 44th percentile – i.e., 44% of other outputs scored the same or lower than it.
So far Altmetric has tracked 364 research outputs from this source. They receive a mean Attention Score of 4.6. This one has gotten more attention than average, scoring higher than 53% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 264,074 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 56% of its contemporaries.
We're also able to compare this research output to 14 others from the same source and published within six weeks on either side of this one. This one is in the 42nd percentile – i.e., 42% of its contemporaries scored the same or lower than it.
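For readers curious how those percentile figures are derived, here is a minimal sketch of the "same or lower" rule quoted above, assuming a simple tie-inclusive count; Altmetric's exact binning and tie handling may differ.

```python
def percentile_rank(score, peer_scores):
    """Percent of peer outputs whose Attention Score is the same or lower,
    mirroring the wording above ("44% ... scored the same or lower")."""
    same_or_lower = sum(1 for s in peer_scores if s <= score)
    return 100.0 * same_or_lower / len(peer_scores)

# Illustrative only: this output's score of 3 against a made-up peer sample.
print(round(percentile_rank(3, [0, 1, 1, 2, 3, 4, 8, 15, 25, 40])))  # -> 50
```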