
Updated clinical guidelines experience major reporting limitations

Overview of attention for article published in Implementation Science, October 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (90th percentile)

Mentioned by

twitter
43 tweeters

Readers on

mendeley
36 Mendeley readers
Title
Updated clinical guidelines experience major reporting limitations
Published in
Implementation Science, October 2017
DOI 10.1186/s13012-017-0651-3
Pubmed ID
Authors

Robin W.M. Vernooij, Laura Martínez García, Ivan Dario Florez, Laura Hidalgo Armas, Michiel H.F. Poorthuis, Melissa Brouwers, Pablo Alonso-Coello

Abstract

The Checklist for the Reporting of Updated Guidelines (CheckUp) was recently developed. However, so far, no systematic assessment of the reporting of updated clinical guidelines (CGs) exists. We aimed to examine (1) the completeness of reporting of the updating process in CGs and (2) the inter-observer reliability of CheckUp.

We conducted a systematic assessment of the reporting of the updating process in a sample of updated CGs using CheckUp. We performed a systematic search to identify updated CGs published in 2015, developed by a professional society, reporting a systematic review of the evidence, and containing at least one recommendation. Three reviewers independently assessed the CGs with CheckUp (16 items). We calculated the median score per item, per domain, and overall, converting scores to a 10-point scale. Multiple linear regression analyses were used to identify differences according to country, type of organisation, scope, and health topic of updated CGs. We calculated the intraclass correlation coefficient (ICC) and 95% confidence interval (95% CI) for domain and overall scores.

We included a total of 60 updated CGs. The median domain score on a 10-point scale was 5.8 for presentation (range 1.7 to 10), 8.3 for editorial independence (range 3.3 to 10), and 5.7 for methodology (range 0 to 10). The median overall score on a 10-point scale was 6.3 (range 3.1 to 10). The presentation and justification items at the recommendation level (reported by 27% and 38% of the CGs, respectively) and the methods used for the external review and for implementing changes in practice (each reported by 38% of the CGs) were particularly poorly reported. CGs developed by a European or international institution obtained a statistically significantly higher overall score than those from North American or Asian institutions (p = 0.014). Finally, agreement among the reviewers on the overall score was excellent (ICC 0.88, 95% CI 0.75 to 0.95).

The reporting of updated CGs varies considerably, with significant room for improvement. We recommend using CheckUp to assess the updating process in updated CGs and as a blueprint to inform methods and reporting strategies in updating.
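The abstract reports inter-observer reliability as an intraclass correlation coefficient (ICC 0.88 for the overall score among three reviewers). The paper does not state which ICC variant was computed, so the sketch below assumes the common two-way random-effects, average-measures form (Shrout & Fleiss ICC(2,k)); the input data are simulated and purely illustrative, not the study's ratings.

```python
import numpy as np

def icc_2k(scores):
    """Average-measures, two-way random-effects ICC (Shrout & Fleiss ICC(2,k)).

    scores: array of shape (n_targets, k_raters),
    e.g. 60 guidelines rated by 3 reviewers.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # one mean per guideline
    col_means = scores.mean(axis=0)   # one mean per reviewer

    # Mean squares from a two-way ANOVA without replication
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = (np.sum((scores - grand) ** 2)
              - k * np.sum((row_means - grand) ** 2)
              - n * np.sum((col_means - grand) ** 2))
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Hypothetical data mimicking the study's setup: 60 guidelines scored
# on a 10-point scale by 3 reviewers, with small per-rating noise.
rng = np.random.default_rng(42)
true_quality = rng.uniform(3, 10, size=(60, 1))
ratings = np.clip(true_quality + rng.normal(0, 0.5, (60, 3)), 0, 10)
print(round(icc_2k(ratings), 2))
```

With identical ratings from every reviewer the function returns exactly 1; larger rater disagreement pulls it toward 0. Libraries such as pingouin (`intraclass_corr`) report all six Shrout–Fleiss variants if a vetted implementation is preferred.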

Twitter Demographics

The data shown below were collected from the profiles of 43 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 36 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 36 100%

Demographic breakdown

Readers by professional status Count As %
Researcher 6 17%
Student > Master 5 14%
Student > Doctoral Student 4 11%
Student > Bachelor 3 8%
Lecturer 3 8%
Other 11 31%
Unknown 4 11%
Readers by discipline Count As %
Medicine and Dentistry 12 33%
Nursing and Health Professions 3 8%
Pharmacology, Toxicology and Pharmaceutical Science 2 6%
Psychology 2 6%
Unspecified 1 3%
Other 7 19%
Unknown 9 25%

Attention Score in Context

This research output has an Altmetric Attention Score of 24. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 01 October 2019.
All research outputs
#1,214,424
of 20,927,597 outputs
Outputs from Implementation Science
#271
of 1,674 outputs
Outputs of similar age
#27,118
of 297,390 outputs
Outputs of similar age from Implementation Science
#2
of 2 outputs
Altmetric has tracked 20,927,597 research outputs across all sources so far. Compared to these this one has done particularly well and is in the 94th percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,674 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.6. This one has done well, scoring higher than 83% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 297,390 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 90% of its contemporaries.
We're also able to compare this research output to 2 others from the same source and published within six weeks on either side of this one.