
Preliminary testing of the reliability and feasibility of SAGE: a system to measure and score engagement with and use of research in health policies and programs

Overview of attention for article published in Implementation Science, December 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (85th percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • X: 20 users

Citations

  • Dimensions: 5 citations

Readers on

  • Mendeley: 30 readers
Title
Preliminary testing of the reliability and feasibility of SAGE: a system to measure and score engagement with and use of research in health policies and programs
Published in
Implementation Science, December 2017
DOI 10.1186/s13012-017-0676-7
Authors

Steve R. Makkar, Anna Williamson, Catherine D’Este, Sally Redman

Abstract

Few measures of research use in health policymaking are available, and the reliability of such measures has yet to be evaluated. A new measure called the Staff Assessment of Engagement with Evidence (SAGE) incorporates an interview that explores policymakers' research use within discrete policy documents, and a scoring tool that quantifies the extent of policymakers' research use based on the interview transcript and analysis of the policy document itself. We aimed to conduct a preliminary investigation of the usability, sensitivity, and reliability of the scoring tool in measuring research use by policymakers.

Nine experts in health policy research and two independent coders were recruited. Each expert used the scoring tool to rate a random selection of 20 interview transcripts, and each independent coder rated 60 transcripts. The distribution of scores among experts was examined, and interrater reliability was then tested within and between the experts and independent coders. Average- and single-measure reliability coefficients were computed for each SAGE subscale.

Experts' scores ranged from the limited to the extensive scoring bracket for all subscales. Experts as a group also exhibited at least a fair level of interrater agreement across all subscales. Single-measure reliability was at least fair except for three subscales: Relevance Appraisal, Conceptual Use, and Instrumental Use. Average- and single-measure reliability among independent coders was good to excellent for all subscales. Finally, reliability between experts and independent coders was fair to excellent for all subscales.

Among experts, the scoring tool was comprehensible, usable, and sensitive enough to discriminate between documents with varying degrees of research use. The tool also yielded scores with good reliability among the independent coders. There was greater variability among experts, although the tool was fairly reliable when their ratings were pooled as a group. The alignment between experts' and independent coders' ratings indicates that the independent coders were scoring in a manner comparable to health policy research experts. If the present findings are replicated in a larger sample, end users (e.g. policy agency staff) could potentially be trained to use SAGE to reliably score research use within their agencies, which would provide a cost-effective and time-efficient approach to utilising this measure in practice.
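The "average- and single-measure reliability coefficients" mentioned above are, in studies of this kind, typically intraclass correlations. The sketch below is an illustration rather than the authors' actual analysis: it assumes a complete transcripts-by-raters matrix, interprets the two coefficients as the two-way random-effects ICC(2,1) and ICC(2,k) of Shrout and Fleiss, and uses simulated ratings.

```python
# Minimal sketch (NOT the authors' analysis code): single- and average-measure
# reliability for one SAGE subscale, computed as two-way random-effects ICCs
# via the Shrout & Fleiss mean-squares formulas. Ratings are simulated.
import numpy as np

def icc_two_way_random(x: np.ndarray) -> tuple[float, float]:
    """x: (n_targets, k_raters) matrix of subscale scores.
    Returns (single-measure ICC(2,1), average-measure ICC(2,k))."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)  # per-transcript means
    col_means = x.mean(axis=0)  # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between-targets MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between-raters MS
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                       # residual MS
    single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    average = (msr - mse) / (msr + (msc - mse) / n)
    return single, average

# Hypothetical example: 20 transcripts each scored by 9 experts on one subscale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 10, size=(20, 9)).astype(float)
print(icc_two_way_random(ratings))
```

ICC(2,1) estimates the reliability of any single rater's score, while ICC(2,k) estimates the reliability of the mean across all k raters, which is why average-measure coefficients run higher, as reported in the abstract.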

X Demographics

The data shown below were collected from the profiles of the 20 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 30 Mendeley readers of this research output.

Geographical breakdown

Country    Count  As %
Unknown       30  100%

Demographic breakdown

Readers by professional status   Count  As %
Researcher                           8   27%
Student > Ph.D. Student              4   13%
Student > Master                     4   13%
Student > Bachelor                   1    3%
Librarian                            1    3%
Other                                1    3%
Unknown                             11   37%
Readers by discipline                 Count  As %
Medicine and Dentistry                    5   17%
Social Sciences                           3   10%
Nursing and Health Professions            2    7%
Business, Management and Accounting       2    7%
Environmental Science                     2    7%
Other                                     6   20%
Unknown                                  10   33%
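For reference, the "As %" columns above follow directly from the counts: each count is divided by the 30 Mendeley readers and rounded to the nearest percent. A minimal sketch, with the category labels copied from the table above:

```python
# Derive the "As %" column for the professional-status table: count / 30
# readers, rounded to the nearest whole percent (matches the figures shown).
counts = {
    "Researcher": 8,
    "Student > Ph.D. Student": 4,
    "Student > Master": 4,
    "Student > Bachelor": 1,
    "Librarian": 1,
    "Other": 1,
    "Unknown": 11,
}
total_readers = 30
for status, n in counts.items():
    print(f"{status:28s}{n:3d}  {round(100 * n / total_readers):3d}%")
```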
Attention Score in Context

This research output has an Altmetric Attention Score of 12. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 03 February 2018.
All research outputs                                  #3,113,698  of 25,765,370 outputs
Outputs from Implementation Science                   #626        of 1,821 outputs
Outputs of similar age                                #65,053     of 449,615 outputs
Outputs of similar age from Implementation Science    #29         of 49 outputs
Altmetric has tracked 25,765,370 research outputs across all sources so far. Compared to these, this one has done well: it is in the 87th percentile, placing it in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,821 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.9. This one scores higher than 65% of its peers from the same source; its own score of 12 sits below the source's mean because a small number of highly shared outputs pulls the mean well above the typical score.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 449,615 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 85% of its contemporaries.
We're also able to compare this research output to 49 others from the same source that were published within six weeks on either side of this one. This one is in the 40th percentile; that is, 40% of its contemporaries scored the same as or lower than it.
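The percentiles quoted in this section follow from the ranks and totals listed above. The sketch below is an inference from the figures shown, not Altmetric's documented formula: taking the share of outputs ranked at or below this one, floored to a whole percent, reproduces every percentile on the page.

```python
# Rank-to-percentile arithmetic implied by the "Attention Score in Context"
# table: percentile = share of tracked outputs ranked at or below this one.
# Integer (floor) division matches the page's quoted figures.
def percentile(rank: int, total: int) -> int:
    return (total - rank) * 100 // total

print(percentile(3_113_698, 25_765_370))  # 87 -> "87th percentile", all outputs
print(percentile(626, 1_821))             # 65 -> "65% of its peers", same source
print(percentile(65_053, 449_615))        # 85 -> "85% of contemporaries", similar age
print(percentile(29, 49))                 # 40 -> "40th percentile", same age and source
```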