
“Scaling-out” evidence-based interventions to new populations or new health care delivery systems

Overview of attention for article published in Implementation Science, September 2017

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (94th percentile)

Mentioned by

  • Twitter: 86 tweeters

Citations

  • Dimensions: 150 citations

Readers on

  • Mendeley: 238 readers
Title: “Scaling-out” evidence-based interventions to new populations or new health care delivery systems
Published in: Implementation Science, September 2017
DOI: 10.1186/s13012-017-0640-6
Pubmed ID:
Authors: Gregory A. Aarons, Marisa Sklar, Brian Mustanski, Nanette Benbow, C. Hendricks Brown

Abstract

Implementing treatments and interventions with demonstrated effectiveness is critical for improving patient health outcomes at a reduced cost. When an evidence-based intervention (EBI) is implemented with fidelity in a setting that is very similar to the setting in which it was previously found to be effective, it is reasonable to anticipate similar benefits of that EBI. However, one goal of implementation science is to expand the use of EBIs as broadly as is feasible and appropriate in order to foster the greatest public health impact. When implementing an EBI in a novel setting, or targeting novel populations, one must consider whether there is sufficient justification that the EBI would have similar benefits to those found in earlier trials. In this paper, we introduce a new concept for implementation called "scaling-out," in which EBIs are adapted to new populations, to new delivery systems, or to both. Using existing external validity theories and multilevel mediation modeling, we provide a logical framework for determining what new empirical evidence is required for an intervention to retain its evidence-based standard in this new context. The motivating questions are whether scale-out can reasonably be expected to produce population-level effectiveness comparable to that found in previous studies, and what additional empirical evaluations would be necessary to test for this short of an entirely new effectiveness trial. We present evaluation options for assessing whether scaling-out achieves the ultimate health outcome of interest. In scaling to health or service delivery systems or population/community contexts that differ from the setting where the EBI was originally tested, there are situations where a shorter timeframe of translation is possible. We argue that implementation of an EBI in a moderately different setting or with a different population can sometimes "borrow strength" from evidence of impact in a prior effectiveness trial. The need for additional empirical data is determined by the nature and degree of adaptations to the EBI and the context. Our argument in this paper is conceptual, and we propose formal empirical tests of mediational equivalence in a follow-up paper.


Mendeley readers

The data shown below were compiled from readership statistics for the 238 Mendeley readers of this research output.

Geographical breakdown

Country                          Count   As %
Unknown                            238   100%

Demographic breakdown

Readers by professional status   Count   As %
Researcher                          52    22%
Student > Ph.D. Student             35    15%
Student > Master                    26    11%
Student > Doctoral Student          22     9%
Other                               18     8%
Other                               47    20%
Unknown                             38    16%

Readers by discipline            Count   As %
Medicine and Dentistry              54    23%
Psychology                          41    17%
Social Sciences                     40    17%
Nursing and Health Professions      26    11%
Arts and Humanities                  4     2%
Other                               22     9%
Unknown                             51    21%

Attention Score in Context

This research output has an Altmetric Attention Score of 49. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 23 November 2021.
All research outputs: #594,586 of 19,519,615 outputs
Outputs from Implementation Science: #87 of 1,641 outputs
Outputs of similar age: #15,451 of 285,861 outputs
Outputs of similar age from Implementation Science: #1 of 2 outputs
Altmetric has tracked 19,519,615 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 96th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,641 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.4. This one has done particularly well, scoring higher than 94% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 285,861 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 94% of its contemporaries.
We're also able to compare this research output to 2 others from the same source that were published within six weeks on either side of this one. It has scored higher than both of them.