
Applying GRADE-CERQual to qualitative evidence synthesis findings–paper 7: understanding the potential impacts of dissemination bias

Overview of attention for article published in Implementation Science, January 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (87th percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

news
1 news outlet
policy
1 policy source
twitter
4 X users

Citations

dimensions_citation
53 Dimensions

Readers on

mendeley
202 Mendeley
Published in
Implementation Science, January 2018
DOI 10.1186/s13012-017-0694-5
Authors

Andrew Booth, Simon Lewin, Claire Glenton, Heather Munthe-Kaas, Ingrid Toews, Jane Noyes, Arash Rashidian, Rigmor C. Berg, Brenda Nyakang’o, Joerg J. Meerpohl, GRADE-CERQual Coordinating Team

Abstract

The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach has been developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) Working Group. The approach has been developed to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation. CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) coherence, (3) adequacy of data and (4) relevance. This paper is part of a series providing guidance on how to apply CERQual and focuses on a probable fifth component, dissemination bias. Given its exploratory nature, we are not yet able to provide guidance on applying this potential component of the CERQual approach. Instead, we focus on how dissemination bias might be conceptualised in the context of qualitative research and the potential impact dissemination bias might have on an overall assessment of confidence in a review finding. We also set out a proposed research agenda in this area. We developed this paper by gathering feedback from relevant research communities, searching MEDLINE and Web of Science to identify and characterise the existing literature discussing or assessing dissemination bias in qualitative research and its wider implications, developing consensus through project group meetings, and conducting an online survey of the extent, awareness and perceptions of dissemination bias in qualitative research. We have defined dissemination bias in qualitative research as a systematic distortion of the phenomenon of interest due to selective dissemination of studies or individual study findings. 
Dissemination bias is important for qualitative evidence syntheses as the selective dissemination of qualitative studies and/or study findings may distort our understanding of the phenomena that these syntheses aim to explore and thereby undermine our confidence in these findings. Dissemination bias has been extensively examined in the context of randomised controlled trials and systematic reviews of such studies. The effects of potential dissemination bias are formally considered, as publication bias, within the GRADE approach. However, the issue has received almost no attention in the context of qualitative research. Because of very limited understanding of dissemination bias and its potential impact on review findings in the context of qualitative evidence syntheses, this component is currently not included in the GRADE-CERQual approach. Further research is needed to establish the extent and impacts of dissemination bias in qualitative research and the extent to which dissemination bias needs to be taken into account when we assess how much confidence we have in findings from qualitative evidence syntheses.

X Demographics

The data shown below were collected from the profiles of the 4 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 202 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Unknown | 202 | 100%

Demographic breakdown

Readers by professional status | Count | As %
Researcher | 37 | 18%
Student > Ph.D. Student | 32 | 16%
Student > Master | 26 | 13%
Other | 12 | 6%
Student > Postgraduate | 10 | 5%
Other | 48 | 24%
Unknown | 37 | 18%
Readers by discipline | Count | As %
Medicine and Dentistry | 51 | 25%
Social Sciences | 30 | 15%
Nursing and Health Professions | 24 | 12%
Psychology | 12 | 6%
Business, Management and Accounting | 7 | 3%
Other | 30 | 15%
Unknown | 48 | 24%
Attention Score in Context

This research output has an Altmetric Attention Score of 14. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 16 April 2021.
All research outputs
#2,433,964
of 25,032,929 outputs
Outputs from Implementation Science
#499
of 1,795 outputs
Outputs of similar age
#55,064
of 452,677 outputs
Outputs of similar age from Implementation Science
#23
of 43 outputs
Altmetric has tracked 25,032,929 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 90th percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,795 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.9. This one has received more attention than average, scoring higher than 72% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 452,677 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 87% of its contemporaries.
We're also able to compare this research output to 43 others from the same source and published within six weeks on either side of this one. This one is in the 48th percentile – i.e., 48% of its contemporaries scored the same or lower than it.
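The age-adjusted percentiles quoted above follow directly from a rank within a cohort: the share of outputs ranked at or below this one. A minimal sketch, using the similar-age figures reported above (the function name is illustrative, not part of Altmetric's API):

```python
def percentile_from_rank(rank, total):
    """Percent of cohort outputs ranked at or below this one.

    A lower rank number means more attention, so the outputs this one
    outscores are the (total - rank) ranked beneath it.
    """
    return int(100 * (total - rank) / total)

# Rank #55,064 of 452,677 outputs of similar age, as quoted above:
print(percentile_from_rank(55_064, 452_677))  # 87 -> the 87th percentile
```

This reproduces the "scoring higher than 87% of its contemporaries" figure from the rank and cohort size alone.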