
“Scaling-out” evidence-based interventions to new populations or new health care delivery systems

Overview of attention for article published in Implementation Science, September 2017

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (94th percentile)

Mentioned by

Twitter: 85 tweeters

Citations

Dimensions: 185 citations

Readers on

Mendeley: 265 readers
DOI 10.1186/s13012-017-0640-6
Authors

Gregory A. Aarons, Marisa Sklar, Brian Mustanski, Nanette Benbow, C. Hendricks Brown

Abstract

Implementing treatments and interventions with demonstrated effectiveness is critical for improving patient health outcomes at reduced cost. When an evidence-based intervention (EBI) is implemented with fidelity in a setting very similar to the one in which it was previously found to be effective, it is reasonable to anticipate similar benefits. However, one goal of implementation science is to expand the use of EBIs as broadly as is feasible and appropriate in order to foster the greatest public health impact. When implementing an EBI in a novel setting, or targeting novel populations, one must consider whether there is sufficient justification that the EBI would confer benefits similar to those found in earlier trials. In this paper, we introduce a new implementation concept called "scaling-out," which applies when EBIs are adapted to new populations, new delivery systems, or both. Using existing external validity theories and multilevel mediation modeling, we provide a logical framework for determining what new empirical evidence is required for an intervention to retain its evidence-based standard in the new context. The motivating questions are whether scaling-out can reasonably be expected to produce the population-level effectiveness found in previous studies, and what additional empirical evaluations would be necessary to test for this short of an entirely new effectiveness trial. We present evaluation options for assessing whether scaling-out achieves the ultimate health outcome of interest. In scaling to health or service delivery systems or population/community contexts that differ from the setting where the EBI was originally tested, there are situations in which a shorter timeframe of translation is possible. We argue that implementation of an EBI in a moderately different setting or with a different population can sometimes "borrow strength" from evidence of impact in a prior effectiveness trial. How much additional empirical data must be collected depends on the nature and degree of adaptations to the EBI and its context. Our argument in this paper is conceptual; we propose formal empirical tests of mediational equivalence in a follow-up paper.

Twitter Demographics

These data were collected from the profiles of the 85 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for the 265 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Unknown    265     100%

Demographic breakdown

Readers by professional status   Count   As %
Researcher                       57      22%
Student > Ph.D. Student          38      14%
Student > Master                 27      10%
Student > Doctoral Student       24      9%
Other                            20      8%
Other                            53      20%
Unknown                          46      17%
Readers by discipline            Count   As %
Medicine and Dentistry           55      21%
Social Sciences                  44      17%
Psychology                       43      16%
Nursing and Health Professions   27      10%
Unspecified                      5       2%
Other                            31      12%
Unknown                          60      23%

Attention Score in Context

This research output has an Altmetric Attention Score of 49. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 23 November 2021.
All research outputs                                 #650,828 of 21,243,519 outputs
Outputs from Implementation Science                  #89 of 1,679 outputs
Outputs of similar age                               #15,705 of 290,479 outputs
Outputs of similar age from Implementation Science   #1 of 2 outputs
Altmetric has tracked 21,243,519 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 96th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,679 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.7. This one has done particularly well, scoring higher than 94% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 290,479 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 94% of its contemporaries.
We're also able to compare this research output to 2 others from the same source and published within six weeks on either side of this one. This one has scored higher than all of them.