
Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 5: how to assess adequacy of data

Overview of attention for article published in Implementation Science, January 2018

About this Attention Score

  • Good Attention Score compared to outputs of the same age (68th percentile)

Mentioned by

8 X users

Citations

158 Dimensions

Readers on

224 Mendeley
Title
Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 5: how to assess adequacy of data
Published in
Implementation Science, January 2018
DOI 10.1186/s13012-017-0692-7
Pubmed ID
Authors

Claire Glenton, Benedicte Carlsen, Simon Lewin, Heather Munthe-Kaas, Christopher J. Colvin, Özge Tunçalp, Meghan A. Bohren, Jane Noyes, Andrew Booth, Ruth Garside, Arash Rashidian, Signe Flottorp, Megan Wainwright

Abstract

The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach was developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) working group to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation. CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) coherence, (3) adequacy of data, and (4) relevance. This paper is part of a series providing guidance on how to apply CERQual and focuses on the adequacy of data component. We developed this component by searching the literature for definitions, gathering feedback from relevant research communities, and developing consensus through project group meetings. We tested the component within several qualitative evidence syntheses before agreeing on the current definition and principles for application. When applying CERQual, we define adequacy of data as an overall determination of the degree of richness and the quantity of data supporting a review finding. In this paper, we describe the adequacy component and its rationale and offer guidance for review authors and others on how to assess data adequacy in the context of a review finding as part of the CERQual approach. This guidance outlines the information required to assess data adequacy, the steps needed to carry out the assessment, and examples of adequacy assessments.
We approach assessments of data adequacy in terms of the richness and quantity of the data supporting each review finding but do not offer fixed rules on what constitutes sufficiently rich data or an adequate quantity of data. Instead, we recommend that this assessment be made in relation to the nature of the finding. We expect the CERQual approach, and its individual components, to develop further as our experiences with the practical implementation of the approach increase.

X Demographics

The data shown below were collected from the profiles of 8 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 224 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Unknown | 224 | 100%

Demographic breakdown

Readers by professional status | Count | As %
Researcher | 33 | 15%
Student > Ph.D. Student | 32 | 14%
Student > Master | 30 | 13%
Other | 12 | 5%
Student > Doctoral Student | 11 | 5%
Other | 51 | 23%
Unknown | 55 | 25%
Readers by discipline | Count | As %
Medicine and Dentistry | 45 | 20%
Nursing and Health Professions | 31 | 14%
Social Sciences | 31 | 14%
Psychology | 21 | 9%
Business, Management and Accounting | 12 | 5%
Other | 21 | 9%
Unknown | 63 | 28%
Attention Score in Context

This research output has an Altmetric Attention Score of 4. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 01 February 2018.
All research outputs: #6,587,629 of 23,305,591 outputs
Outputs from Implementation Science: #1,121 of 1,728 outputs
Outputs of similar age: #134,893 of 442,666 outputs
Outputs of similar age from Implementation Science: #38 of 43 outputs
Altmetric has tracked 23,305,591 research outputs across all sources so far. This one has received more attention than most of these and is in the 70th percentile.
So far Altmetric has tracked 1,728 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.8. This one is in the 34th percentile – i.e., 34% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 442,666 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 68% of its contemporaries.
We're also able to compare this research output to 43 others from the same source and published within six weeks on either side of this one. This one is in the 13th percentile – i.e., 13% of its contemporaries scored the same or lower than it.