
Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 4: how to assess coherence

Overview of attention for article published in Implementation Science, January 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (75th percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by: 14 tweeters
Cited by: 90 (Dimensions)
Readers: 163 (Mendeley)
Published in
Implementation Science, January 2018
DOI 10.1186/s13012-017-0691-8
Christopher J. Colvin, Ruth Garside, Megan Wainwright, Heather Munthe-Kaas, Claire Glenton, Meghan A. Bohren, Benedicte Carlsen, Özge Tunçalp, Jane Noyes, Andrew Booth, Arash Rashidian, Signe Flottorp, Simon Lewin


The GRADE-CERQual (Grading of Recommendations Assessment, Development and Evaluation-Confidence in Evidence from Reviews of Qualitative research) approach was developed by the GRADE working group to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation. CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) relevance, (3) coherence and (4) adequacy of data. This paper is part of a series providing guidance on how to apply CERQual and focuses on the coherence component.

We developed the coherence component by searching the literature for definitions, gathering feedback from relevant research communities and building consensus through project group meetings. We tested the component within several qualitative evidence syntheses before agreeing on the current definition and principles for application. In the context of CERQual, we define coherence as how clear and cogent the fit is between the data from the primary studies and a review finding that synthesises those data.

This paper describes the coherence component and its rationale, and offers guidance for review authors and others on assessing coherence in the context of a review finding as part of the CERQual approach: the information required for an assessment, the steps to take and examples of completed coherence assessments. We suggest that threats to coherence may arise when the data supporting a review finding are contradictory, ambiguous or incomplete, or where competing theories exist that could be used to synthesise the data.

We expect the CERQual approach, and its individual components, to develop further as experience with the practical implementation of the approach grows.

Twitter Demographics

The data shown below were collected from the profiles of the 14 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for the 163 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 163 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph. D. Student 31 19%
Researcher 29 18%
Student > Master 27 17%
Student > Doctoral Student 12 7%
Professor 9 6%
Other 34 21%
Unknown 21 13%
Readers by discipline Count As %
Medicine and Dentistry 41 25%
Social Sciences 28 17%
Nursing and Health Professions 23 14%
Psychology 20 12%
Business, Management and Accounting 4 2%
Other 16 10%
Unknown 31 19%

Attention Score in Context

This research output has an Altmetric Attention Score of 7. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 06 February 2018.
This score was compared against the following pools of research outputs:

  • All research outputs: 12,470,921 outputs
  • Outputs from Implementation Science: 1,306 outputs
  • Outputs of similar age: 339,933 outputs
  • Outputs of similar age from Implementation Science: 29 outputs
Altmetric has tracked 12,470,921 research outputs across all sources so far. Compared with these, this one has done well and is in the 82nd percentile: it is in the top 25% of all research outputs ever tracked by Altmetric.
So far, Altmetric has tracked 1,306 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.0. This one has received more attention than average, scoring higher than 52% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 339,933 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 75% of its contemporaries.
We're also able to compare this research output to 29 others from the same source and published within six weeks on either side of this one. This one is in the 34th percentile – i.e., 34% of its contemporaries scored the same or lower than it.
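The percentile figures above rest on a simple rank calculation: count how many peers in the comparison pool scored the same or lower. The sketch below illustrates that calculation under stated assumptions; it is not Altmetric's actual implementation, and the peer scores are made up for demonstration.

```python
# Illustrative sketch only, NOT Altmetric's code: percentile rank of a score
# within a pool of peer scores, counting peers that scored the same or lower
# (as in "34% of its contemporaries scored the same or lower than it").
def percentile_rank(score, peer_scores):
    """Return the percentage of peers whose score is <= the given score."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return round(100 * at_or_below / len(peer_scores))

peers = [1, 2, 3, 5, 7, 9, 12, 20]  # hypothetical peer Attention Scores
print(percentile_rank(7, peers))    # → 62
```

Under this definition an output that ties with many peers still counts those ties in its favour, which is why Altmetric phrases the comparison as "scored the same or lower".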