
Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 4: how to assess coherence

Overview of attention for an article published in Implementation Science, January 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (78th percentile)

Mentioned by

14 X users

Citations

146 Dimensions

Readers on

212 Mendeley
Title
Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 4: how to assess coherence
Published in
Implementation Science, January 2018
DOI 10.1186/s13012-017-0691-8
Pubmed ID
Authors

Christopher J. Colvin, Ruth Garside, Megan Wainwright, Heather Munthe-Kaas, Claire Glenton, Meghan A. Bohren, Benedicte Carlsen, Özge Tunçalp, Jane Noyes, Andrew Booth, Arash Rashidian, Signe Flottorp, Simon Lewin

Abstract

The GRADE-CERQual (Grading of Recommendations Assessment, Development and Evaluation-Confidence in Evidence from Reviews of Qualitative research) approach was developed by the GRADE working group to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation. CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) relevance, (3) coherence and (4) adequacy of data. This paper is part of a series providing guidance on how to apply CERQual and focuses on its coherence component. We developed the coherence component by searching the literature for definitions, gathering feedback from relevant research communities and developing consensus through project group meetings. We tested the component within several qualitative evidence syntheses before agreeing on the current definition and principles for application. When applying CERQual, we define coherence as how clear and cogent the fit is between the data from the primary studies and a review finding that synthesises those data. In this paper, we describe the coherence component and its rationale, and offer guidance for review authors and others on how to assess coherence for a review finding as part of the CERQual approach. This guidance outlines the information required to assess coherence, the steps involved and examples of completed coherence assessments. We suggest that threats to coherence may arise when the data supporting a review finding are contradictory, ambiguous or incomplete, or where competing theories exist that could be used to synthesise the data. We expect the CERQual approach, and its individual components, to develop further as experience with the practical implementation of the approach grows.
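CERQual is a structured judgement, not an algorithm, but the four threats to coherence named in the abstract can be illustrated as a record a review team might keep per finding. The sketch below is hypothetical: the class, field and method names are ours for illustration and are not part of CERQual.

```python
# Hypothetical sketch only: CERQual is judgement-based, and this merely
# structures the four threats to coherence named in the abstract.
from dataclasses import dataclass

@dataclass
class CoherenceAssessment:
    finding: str               # the review finding being assessed
    contradictory_data: bool   # primary-study data contradict the finding
    ambiguous_data: bool       # data are open to competing interpretations
    incomplete_data: bool      # data only partially support the finding
    competing_theories: bool   # rival theories could synthesise the data

    def threats(self) -> list[str]:
        """List the threats to coherence identified for this finding."""
        flags = {
            "contradictory data": self.contradictory_data,
            "ambiguous data": self.ambiguous_data,
            "incomplete data": self.incomplete_data,
            "competing theories": self.competing_theories,
        }
        return [name for name, present in flags.items() if present]
```

Any threats recorded this way would still need to be translated into the qualitative judgement CERQual calls for, following the paper's guidance, rather than scored mechanically.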

X Demographics

The data shown below were collected from the profiles of the 14 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 212 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 212 100%

Demographic breakdown

Readers by professional status Count As %
Researcher 37 17%
Student > Ph.D. Student 33 16%
Student > Master 30 14%
Student > Doctoral Student 15 7%
Other 10 5%
Other 40 19%
Unknown 47 22%
Readers by discipline Count As %
Medicine and Dentistry 49 23%
Social Sciences 31 15%
Nursing and Health Professions 26 12%
Psychology 22 10%
Business, Management and Accounting 7 3%
Other 20 9%
Unknown 57 27%
Attention Score in Context

This research output has an Altmetric Attention Score of 8. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 06 February 2018.
All research outputs
#4,268,114
of 23,316,003 outputs
Outputs from Implementation Science
#850
of 1,728 outputs
Outputs of similar age
#94,646
of 442,739 outputs
Outputs of similar age from Implementation Science
#32
of 43 outputs
Altmetric has tracked 23,316,003 research outputs across all sources so far. Compared to these, this one has done well: it is in the 81st percentile, placing it in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,728 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.8. This one, with a score of 8, sits below that mean but has still scored higher than 50% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 442,739 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 78% of its contemporaries.
We can also compare this research output to the 43 other outputs from the same source published within six weeks on either side of this one. Here it is in the 27th percentile: 27% of those contemporaries scored the same or lower.
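The percentile figures above follow directly from the rank and cohort-size pairs listed. A minimal sketch of the arithmetic (Python; assuming, as the wording above suggests, that a percentile counts outputs scoring the same or lower):

```python
# Rank / cohort-size pairs from the four comparisons listed on this page.
comparisons = {
    "All research outputs": (4_268_114, 23_316_003),
    "Outputs from Implementation Science": (850, 1_728),
    "Outputs of similar age": (94_646, 442_739),
    "Outputs of similar age from Implementation Science": (32, 43),
}

for label, (rank, total) in comparisons.items():
    # Rank 1 is the highest score, so (total - rank + 1) outputs scored
    # the same as or lower; floor division reproduces the page's figures.
    percentile = 100 * (total - rank + 1) // total
    print(f"{label}: percentile {percentile}")
```

Running this yields 81, 50, 78 and 27, matching the percentile figures quoted above.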