
Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 3: how to assess methodological limitations

Overview of attention for article published in Implementation Science, January 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (76th percentile)

Mentioned by

  • 1 policy source
  • 9 X users

Citations

  • 163 Dimensions

Readers on

  • 242 Mendeley
Published in
Implementation Science, January 2018
DOI 10.1186/s13012-017-0690-9
Authors

Heather Munthe-Kaas, Meghan A. Bohren, Claire Glenton, Simon Lewin, Jane Noyes, Özge Tunçalp, Andrew Booth, Ruth Garside, Christopher J. Colvin, Megan Wainwright, Arash Rashidian, Signe Flottorp, Benedicte Carlsen

Abstract

The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach was developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) Working Group to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation. CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) coherence, (3) adequacy of data and (4) relevance. This paper is part of a series providing guidance on how to apply CERQual and focuses on its methodological limitations component. We developed this component by searching the literature for definitions, gathering feedback from relevant research communities and developing consensus through project group meetings. We tested the component within several qualitative evidence syntheses before agreeing on the current definition and principles for application. When applying CERQual, we define methodological limitations as the extent to which there are concerns about the design or conduct of the primary studies that contributed evidence to an individual review finding. In this paper, we describe the methodological limitations component and its rationale, and offer guidance on how to assess the methodological limitations of a review finding as part of the CERQual approach. This guidance outlines the information needed to assess the methodological limitations component, the steps required to assess the methodological limitations of data contributing to a review finding, and examples of methodological limitation assessments.
This paper provides guidance for review authors and others on assessing methodological limitations in the context of the CERQual approach. More work is needed to determine which criteria critical appraisal tools should include when assessing methodological limitations. We currently recommend that, whichever tool is used, review authors provide a transparent description of their assessments of the methodological limitations of a review finding. We expect the CERQual approach and its individual components to develop further as experience with the practical implementation of the approach increases.

X Demographics

The data shown below were collected from the profiles of 9 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 242 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 242 100%

Demographic breakdown

Readers by professional status Count As %
Student > Master 40 17%
Student > Ph.D. Student 38 16%
Researcher 31 13%
Student > Doctoral Student 19 8%
Other 11 5%
Other 50 21%
Unknown 53 22%
Readers by discipline Count As %
Medicine and Dentistry 52 21%
Social Sciences 35 14%
Nursing and Health Professions 32 13%
Psychology 23 10%
Business, Management and Accounting 10 4%
Other 25 10%
Unknown 65 27%
Attention Score in Context

This research output has an Altmetric Attention Score of 7. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 16 April 2021.
All research outputs
#5,133,923
of 25,032,929 outputs
Outputs from Implementation Science
#923
of 1,795 outputs
Outputs of similar age
#106,271
of 452,677 outputs
Outputs of similar age from Implementation Science
#33
of 43 outputs
Altmetric has tracked 25,032,929 research outputs across all sources so far. Compared to these, this one has done well and is in the 79th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,795 research outputs from this source. They typically receive substantially more attention than average, with a mean Attention Score of 14.9. This one is in the 48th percentile – i.e., 48% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 452,677 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 76% of its contemporaries.
We're also able to compare this research output to 43 others from the same source and published within six weeks on either side of this one. This one is in the 25th percentile – i.e., 25% of its contemporaries scored the same or lower than it.