
A comparative evaluation of genome assembly reconciliation tools

Overview of attention for article published in Genome Biology, May 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (90th percentile)
  • Above-average Attention Score compared to outputs of the same age and source (56th percentile)

Mentioned by

39 X users
1 Facebook page
3 Wikipedia pages

Citations

56 Dimensions

Readers on

209 Mendeley
3 CiteULike
Title
A comparative evaluation of genome assembly reconciliation tools
Published in
Genome Biology, May 2017
DOI 10.1186/s13059-017-1213-3
Pubmed ID
Authors

Hind Alhakami, Hamid Mirebrahim, Stefano Lonardi

Abstract

The majority of eukaryotic genomes are unfinished due to the algorithmic challenges of assembling them. A variety of assembly and scaffolding tools are available, but it is not always obvious which tool or parameters to use for a specific genome size and complexity. It is, therefore, common practice to produce multiple assemblies using different assemblers and parameters, then select the best one for public release. A more compelling approach would allow one to merge multiple assemblies with the intent of producing a higher quality consensus assembly, which is the objective of assembly reconciliation. Several assembly reconciliation tools have been proposed in the literature, but their strengths and weaknesses have never been compared on a common dataset. We fill this need with this work, in which we report on an extensive comparative evaluation of several tools. Specifically, we evaluate contiguity, correctness, coverage, and the duplication ratio of the merged assembly compared to the individual assemblies provided as input. None of the tools we tested consistently improved the quality of the input GAGE and synthetic assemblies. Our experiments show an increase in contiguity in the consensus assembly when the original assemblies already have high quality. In terms of correctness, the quality of the results depends on the specific tool, as well as on the quality and the ranking of the input assemblies. In general, the number of misassemblies ranges from being comparable to the best of the input assemblies to being comparable to the worst of the input assemblies.
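
For context, contiguity in assembly evaluation is most often summarized by the N50 statistic. The sketch below is a minimal illustration of how N50 is typically computed on hypothetical contig lengths; it is not the evaluation pipeline used in the paper, and the assemblies shown are made up for demonstration.

    # Minimal illustration of the N50 contiguity statistic.
    # The contig lengths below are hypothetical; this is not the
    # evaluation pipeline used in the paper.

    def n50(contig_lengths):
        """Length L such that contigs of length >= L cover at least
        half of the total assembly length."""
        total = sum(contig_lengths)
        running = 0
        for length in sorted(contig_lengths, reverse=True):
            running += length
            if 2 * running >= total:
                return length
        return 0

    # Two hypothetical input assemblies and a hypothetical merged assembly.
    assembly_a = [50_000, 30_000, 20_000, 5_000]
    assembly_b = [80_000, 10_000, 10_000, 5_000]
    merged = [90_000, 40_000, 5_000]

    for name, contigs in [("A", assembly_a), ("B", assembly_b), ("merged", merged)]:
        print(f"assembly {name}: total={sum(contigs):,} bp, N50={n50(contigs):,}")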

X Demographics

The data shown below were collected from the profiles of 39 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 209 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
France         1    <1%
Unknown      208   100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph. D. Student            42    20%
Researcher                          34    16%
Student > Master                    26    12%
Student > Bachelor                  25    12%
Student > Doctoral Student           9     4%
Other                               28    13%
Unknown                             45    22%
Readers by discipline                          Count   As %
Agricultural and Biological Sciences              61    29%
Biochemistry, Genetics and Molecular Biology      60    29%
Computer Science                                  11     5%
Immunology and Microbiology                        9     4%
Neuroscience                                       3     1%
Other                                             17     8%
Unknown                                           48    23%
Attention Score in Context

This research output has an Altmetric Attention Score of 24. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 29 April 2023.
All research outputs: #1,626,611 of 25,808,886 outputs
Outputs from Genome Biology: #1,318 of 4,521 outputs
Outputs of similar age: #30,535 of 328,120 outputs
Outputs of similar age from Genome Biology: #28 of 64 outputs
Altmetric has tracked 25,808,886 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 93rd percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 4,521 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 27.5. This one has done well, scoring higher than 70% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 328,120 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 90% of its contemporaries.
We're also able to compare this research output to 64 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 56% of its contemporaries.
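
The percentile figures above follow directly from the rank/total pairs listed in the context table. A minimal sketch, assuming the percentile is the share of tracked outputs ranked below this one, rounded down (an assumption about the rounding, not Altmetric's documented formula):

    # Recompute the percentiles quoted above from the rank/total pairs.
    # Assumes percentile = floor of the share of outputs ranked below this
    # one; this reproduces the quoted figures but is not Altmetric's
    # documented formula.

    contexts = {
        "all research outputs": (1_626_611, 25_808_886),
        "outputs from Genome Biology": (1_318, 4_521),
        "outputs of similar age": (30_535, 328_120),
        "outputs of similar age from Genome Biology": (28, 64),
    }

    for name, (rank, total) in contexts.items():
        pct = int(100 * (total - rank) / total)  # floor for positive values
        print(f"{name}: #{rank:,} of {total:,} -> percentile {pct}")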