
How is AMSTAR applied by authors – a call for better reporting

Overview of attention for article published in BMC Medical Research Methodology, June 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (84th percentile)
  • High Attention Score compared to outputs of the same age and source (86th percentile)

Mentioned by

  • 1 blog
  • 1 policy source
  • 6 X users

Citations

  • 54 Dimensions

Readers on

  • 125 Mendeley
Title: How is AMSTAR applied by authors – a call for better reporting
Published in: BMC Medical Research Methodology, June 2018
DOI: 10.1186/s12874-018-0520-z
Pubmed ID:
Authors: Dawid Pieper, Nadja Koensgen, Jessica Breuing, Long Ge, Uta Wegewitz

Abstract

The assessment of multiple systematic reviews (AMSTAR) tool is widely used for investigating the methodological quality of systematic reviews (SRs). Originally, AMSTAR was developed for SRs of randomized controlled trials (RCTs); its applicability to SRs of other study designs remains unclear. Our objectives were to (1) analyze how AMSTAR is applied by authors and (2) analyze whether authors pay attention to the original purpose of AMSTAR and what it has been validated for.

We searched MEDLINE (via PubMed) from inception through October 2016 to identify studies that applied AMSTAR. Full texts were sought for all retrieved hits and screened by one reviewer; a second reviewer verified the excluded studies (liberal acceleration). Data were extracted into structured tables by one reviewer and checked by a second. Discrepancies at any stage were resolved by consensus or by consulting a third person. We analyzed the data descriptively as frequencies or as medians with interquartile ranges (IQRs). Associations were quantified using risk ratios (RRs) with 95% confidence intervals (CIs).

We identified 247 studies, which included a median of 17 reviews (IQR 8 to 47) per study. AMSTAR was modified in 23% (57/247) of studies. In most studies, an AMSTAR score was calculated (200/247; 81%). Methods for calculating the score varied; summing all "yes" answers (yes = 1) was the most frequent option (102/200; 51%). More than one third of the authors failed to report how the AMSTAR score was obtained (71/200; 36%). In a subgroup analysis, we compared overviews of reviews (n = 154) with methodological publications (n = 93). The overviews of reviews were much less likely to mention limitations with respect to study designs (when studies other than RCTs were included in the reviews) (RR 0.27, 95% CI 0.09 to 0.75) and with respect to the overall score (RR 0.08, 95% CI 0.02 to 0.35).
Authors, peer reviewers, and editors should pay more attention to the correct use and reporting of assessment tools in evidence synthesis. Authors of overviews of reviews should ensure that their review team includes a methodological expert.
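The abstract's two quantitative elements can be illustrated with a short Python sketch: the most frequently reported scoring variant (summing "yes" answers, yes = 1) and a risk ratio with a 95% confidence interval. The function names are hypothetical, and the RR interval uses the standard log-normal approximation, since the paper does not state its exact computation; the numbers in the usage comment are illustrative, not data from the study.

```python
import math


def amstar_score(answers):
    """Most common scoring variant reported in the study:
    count 'yes' answers (yes = 1); every other response
    (no, can't answer, not applicable) contributes 0."""
    return sum(1 for a in answers if a == "yes")


def risk_ratio(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio of group A vs. group B with a 95% CI via the
    log-normal approximation (illustrative; assumes non-zero
    event counts in both groups)."""
    rr = (events_a / total_a) / (events_b / total_b)
    se = math.sqrt(1 / events_a - 1 / total_a
                   + 1 / events_b - 1 / total_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, (lower, upper)


# Hypothetical example: 11-item AMSTAR checklist, 10 vs. 20
# events out of 100 reviews per group.
score = amstar_score(["yes", "no", "yes", "can't answer"])
rr, (lo, hi) = risk_ratio(10, 100, 20, 100)
```

An RR below 1 with an upper CI bound below 1, as in the paper's subgroup comparisons, indicates that the first group (overviews of reviews) was significantly less likely to mention the limitation in question.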

X Demographics

The data shown below were collected from the profiles of 6 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 125 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 125 100%

Demographic breakdown

Readers by professional status Count As %
Student > Master 26 21%
Researcher 14 11%
Student > Ph. D. Student 13 10%
Student > Bachelor 9 7%
Student > Doctoral Student 6 5%
Other 21 17%
Unknown 36 29%
Readers by discipline Count As %
Medicine and Dentistry 21 17%
Psychology 20 16%
Nursing and Health Professions 14 11%
Social Sciences 11 9%
Economics, Econometrics and Finance 4 3%
Other 16 13%
Unknown 39 31%
Attention Score in Context

This research output has an Altmetric Attention Score of 13. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 29 September 2022.
All research outputs
#2,641,513
of 24,917,903 outputs
Outputs from BMC Medical Research Methodology
#396
of 2,223 outputs
Outputs of similar age
#52,458
of 334,415 outputs
Outputs of similar age from BMC Medical Research Methodology
#7
of 45 outputs
Altmetric has tracked 24,917,903 research outputs across all sources so far. Compared to these, this one has done well and is in the 89th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 2,223 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 10.4. This one has done well, scoring higher than 82% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 334,415 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 84% of its contemporaries.
We're also able to compare this research output to 45 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 86% of its contemporaries.