
Models and data of AMPlify: a deep learning tool for antimicrobial peptide prediction

Overview of attention for article published in BMC Research Notes, February 2023

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (59th percentile)
  • High Attention Score compared to outputs of the same age and source (80th percentile)

Mentioned by

4 X users

Citations

4 (Dimensions)

Readers on Mendeley

19 readers
Title
Models and data of AMPlify: a deep learning tool for antimicrobial peptide prediction
Published in
BMC Research Notes, February 2023
DOI 10.1186/s13104-023-06279-1
Authors

Chenkai Li, René L. Warren, Inanc Birol

Abstract

Antibiotic resistance is a rising global threat to human health, prompting researchers to seek effective alternatives to conventional antibiotics, including antimicrobial peptides (AMPs). Recently, we reported AMPlify, an attentive deep learning model for predicting AMPs in databases of peptide sequences. In our tests, AMPlify outperformed the state-of-the-art, and we illustrated its use on data describing the American bullfrog (Rana [Lithobates] catesbeiana) genome. Here we present the model files and training/test data sets used in that study. The original model (the balanced model) was trained on a balanced set of AMP and non-AMP sequences curated from public databases. In this data note, we additionally provide a model trained on an imbalanced set, in which non-AMP sequences far outnumber AMP sequences. The balanced and imbalanced models serve different use cases, and both should benefit the research community by facilitating the discovery and development of novel AMPs. This data note provides two model sets, as well as two AMP and four non-AMP sequence sets for training and testing the balanced and imbalanced models. Each model set comprises five sub-models that together form an ensemble. The first model set corresponds to the original model trained on the balanced training set described in the original AMPlify manuscript, while the second model set was trained on the imbalanced training set.
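The abstract notes that each model set consists of five sub-models combined into an ensemble. As a minimal sketch of how such an ensemble prediction typically works — averaging the per-sub-model probabilities and applying a decision cutoff — the following Python is illustrative only; the function name, threshold, and score-combination rule are assumptions, not AMPlify's actual API.

```python
# Hypothetical sketch of combining five sub-model scores into one
# ensemble prediction; not AMPlify's actual implementation.
from statistics import mean

def ensemble_predict(sub_model_scores, threshold=0.5):
    """Average per-sub-model AMP probabilities and apply a cutoff.

    sub_model_scores: probabilities in [0, 1], one per sub-model.
    Returns (mean_score, is_amp).
    """
    score = mean(sub_model_scores)
    return score, score >= threshold

# Example: four of the five hypothetical sub-models lean toward "AMP".
score, is_amp = ensemble_predict([0.9, 0.8, 0.85, 0.7, 0.4])
```

Averaging smooths out the variance of individual sub-models, which is one common motivation for training several of them on the same data.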

X Demographics

The data shown below were collected from the profiles of the 4 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 19 Mendeley readers of this research output.

Geographical breakdown

Country     Count   As %
Unknown     19      100%

Demographic breakdown

Readers by professional status    Count   As %
Student > Ph.D. Student           4       21%
Lecturer                          2       11%
Other                             1       5%
Student > Bachelor                1       5%
Student > Doctoral Student        1       5%
Other                             2       11%
Unknown                           8       42%
Readers by discipline                                 Count   As %
Biochemistry, Genetics and Molecular Biology          6       32%
Agricultural and Biological Sciences                  2       11%
Computer Science                                      2       11%
Pharmacology, Toxicology and Pharmaceutical Science   1       5%
Unknown                                               8       42%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 03 February 2023.
All research outputs
#14,650,584
of 25,443,857 outputs
Outputs from BMC Research Notes
#1,723
of 4,516 outputs
Outputs of similar age
#185,791
of 472,498 outputs
Outputs of similar age from BMC Research Notes
#5
of 21 outputs
Altmetric has tracked 25,443,857 research outputs across all sources so far. This one is in the 41st percentile – i.e., 41% of other outputs scored the same or lower than it.
So far Altmetric has tracked 4,516 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.2. This one has received more attention than average, scoring higher than 61% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 472,498 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 59% of its contemporaries.
We're also able to compare this research output to 21 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 80% of its contemporaries.