
Expanding the evidence base for global recommendations on health systems: strengths and challenges of the OptimizeMNH guidance process

Overview of attention for article published in Implementation Science, July 2016

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (79th percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • 1 policy source
  • 8 X users

Citations

  • 28 Dimensions

Readers on

  • 71 Mendeley

Title
Expanding the evidence base for global recommendations on health systems: strengths and challenges of the OptimizeMNH guidance process
Published in
Implementation Science, July 2016
DOI 10.1186/s13012-016-0470-y
Pubmed ID
Authors

Claire Glenton, Simon Lewin, Ahmet Metin Gülmezoglu

Abstract

In 2012, the World Health Organization (WHO) published recommendations on the use of optimization or "task-shifting" strategies for key, effective maternal and newborn interventions (the OptimizeMNH guidance). When making recommendations about complex health system interventions such as task-shifting, information about the feasibility and acceptability of interventions can be as important as information about their effectiveness. However, these issues are usually not addressed with the same rigour. This paper describes our use of several innovative strategies to broaden the range of evidence used to develop the OptimizeMNH guidance.

In this guidance, we systematically included evidence regarding the acceptability and feasibility of relevant task-shifting interventions, primarily using qualitative evidence syntheses and multi-country case study syntheses; we used an approach to assess confidence in findings from qualitative evidence syntheses (the Grading of Recommendations, Assessment, Development and Evaluation-Confidence in Evidence from Reviews of Qualitative Research (GRADE-CERQual) approach); and we used a structured evidence-to-decision framework for health systems (the DECIDE framework) to help the guidance panel members move from the different types of evidence to recommendations.

The systematic inclusion of a broader range of evidence, and the use of new guideline development tools, had a number of impacts. Firstly, this broader range of evidence provided relevant information about the feasibility and acceptability of the interventions considered in the guidance, as well as information about key implementation considerations. However, inclusion of this evidence required more time, resources and skills. Secondly, the GRADE-CERQual approach provided a method for indicating to panel members how much confidence they should place in the findings from the qualitative evidence syntheses and so helped panel members to use this qualitative evidence appropriately. Thirdly, the DECIDE framework gave us a structured format in which we could present a large and complex body of evidence to panel members and end users. The framework also prompted the panel to justify their recommendations, giving end users a record of how these decisions were made.

By expanding the range of evidence assessed in a guideline process, we increase the amount of time and resources required. Nevertheless, the WHO has assessed the outputs of this process to be valuable and is currently repeating the approach used in OptimizeMNH in other guidance processes.

X Demographics

The data shown below were collected from the profiles of 8 X users who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 71 Mendeley readers of this research output.

Geographical breakdown

Country          Count    %
United Kingdom       1    1%
United States        1    1%
Unknown             69   97%

Demographic breakdown

Readers by professional status    Count    %
Researcher                           13   18%
Student > Bachelor                    8   11%
Student > Ph.D. Student               6    8%
Student > Master                      6    8%
Student > Doctoral Student            6    8%
Other                                14   20%
Unknown                              18   25%

Readers by discipline             Count    %
Medicine and Dentistry               24   34%
Nursing and Health Professions        9   13%
Social Sciences                       5    7%
Psychology                            3    4%
Unspecified                           3    4%
Other                                 6    8%
Unknown                              21   30%
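
The percentage columns in the tables above appear to be simple rounded shares of the 71 Mendeley readers. A minimal sketch of that arithmetic in Python, assuming standard rounding to the nearest whole percent (the variable names are illustrative):

```python
# Reproduces the "%" column above: each reader count divided by the
# 71 total Mendeley readers, rounded to the nearest whole percent.
# The rounding convention is an assumption, not something stated on the page.

MENDELEY_READERS = 71

readers_by_status = {
    "Researcher": 13,
    "Student > Bachelor": 8,
    "Student > Ph.D. Student": 6,
    "Student > Master": 6,
    "Student > Doctoral Student": 6,
    "Other": 14,
    "Unknown": 18,
}

for status, count in readers_by_status.items():
    share = round(100 * count / MENDELEY_READERS)
    print(f"{status:<28} {count:>3}  {share:>3}%")
```

The discipline breakdown follows the same pattern; for example, 24 of 71 readers in Medicine and Dentistry is roughly 34%.
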
Attention Score in Context

This research output has an Altmetric Attention Score of 8. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 14 December 2022.
  • All research outputs: #4,132,566 of 23,577,654 outputs
  • Outputs from Implementation Science: #806 of 1,728 outputs
  • Outputs of similar age: #73,341 of 365,549 outputs
  • Outputs of similar age from Implementation Science: #17 of 27 outputs
Altmetric has tracked 23,577,654 research outputs across all sources so far. Compared to these, this one has done well and is in the 82nd percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,728 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.8. This one scored higher than 53% of its peers from the same source.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 365,549 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 79% of its contemporaries.
We're also able to compare this research output to 27 others from the same source and published within six weeks on either side of this one. This one is in the 37th percentile – i.e., 37% of its contemporaries scored the same or lower than it.
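
The percentile figures quoted above follow directly from each rank and pool size: an output ranked #r out of n outputs scores higher than roughly (n - r) / n of that pool. A minimal sketch of that calculation in Python, using the rank and pool figures listed above; flooring to a whole percent is an assumption, chosen because it reproduces the percentiles quoted on this page:

```python
import math

# Percentile arithmetic behind the "Attention Score in Context" figures:
# an output ranked `rank` out of `total` scores higher than roughly
# (total - rank) / total of its comparison pool. Flooring to a whole
# percent is an assumption made so the results match the page.

def share_outscored(rank: int, total: int) -> int:
    """Whole-percent share of the comparison pool that this output outscores."""
    return math.floor(100 * (total - rank) / total)

comparisons = {
    "All research outputs": (4_132_566, 23_577_654),        # page quotes 82%
    "Outputs from Implementation Science": (806, 1_728),    # page quotes 53%
    "Outputs of similar age": (73_341, 365_549),             # page quotes 79%
    "Similar age, same source": (17, 27),                    # page quotes 37%
}

for label, (rank, total) in comparisons.items():
    print(f"{label}: scores higher than {share_outscored(rank, total)}% of outputs")
```
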