
Capacity building for evidence-based decision making in local health departments: scaling up an effective training approach

Overview of attention for article published in Implementation Science, September 2014

About this Attention Score

  • Good Attention Score compared to outputs of the same age (73rd percentile)
  • Above-average Attention Score compared to outputs of the same age and source (54th percentile)

Mentioned by

7 X users

Citations

51 Dimensions

Readers on

127 Mendeley
2 CiteULike
Published in
Implementation Science, September 2014
DOI 10.1186/s13012-014-0124-x
Authors

Julie A Jacobs, Kathleen Duggan, Paul Erwin, Carson Smith, Elaine Borawski, Judy Compton, Luann D’Ambrosio, Scott H Frank, Susan Frazier-Kouassi, Peggy A Hannon, Jennifer Leeman, Avia Mainor, Ross C Brownson

Abstract

Background: There are few studies describing how to scale up effective capacity-building approaches for public health practitioners. This study tested local-level evidence-based decision making (EBDM) capacity-building efforts in four U.S. states (Michigan, North Carolina, Ohio, and Washington) with a quasi-experimental design.

Methods: Partners within the four states delivered a previously established Evidence-Based Public Health (EBPH) training curriculum to local health department (LHD) staff. They worked with the research team to modify the curriculum with local data and examples while remaining attentive to course fidelity. Pre- and post-assessments of course participants (n = 82) and an external control group (n = 214) measured importance, availability (i.e., how available a skill is when needed, either within the skillset of the respondent or among others in the agency), and gaps in ten EBDM competencies. Simple and multiple linear regression models assessed the differences between pre- and post-assessment scores. Course participants also assessed the impact of the course on their work.

Results: Course participants reported greater increases in the availability of, and decreases in the gaps in, EBDM competencies at post-test relative to the control group. In adjusted models, significant differences (p < 0.05) were found in 'action planning,' 'evaluation design,' 'communicating research to policymakers,' 'quantifying issues (using descriptive epidemiology),' and 'economic evaluation.' Nearly 45% of participants indicated that EBDM increased within their agency since the training. Course benefits included becoming better leaders and making scientifically informed decisions.

Conclusions: This study demonstrates the potential for improving EBDM capacity among LHD practitioners using a train-the-trainer approach involving diverse partners. This approach allowed for local tailoring of strategies and extended the reach of the EBPH course.
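The abstract does not specify the exact model form, so the following is a minimal illustrative sketch, not the authors' analysis: an adjusted pre/post comparison of the kind described, with a change score (post minus pre) regressed on a training-group indicator while controlling for the baseline score. The data are synthetic, sized to the study's groups (n = 82 participants, n = 214 controls), and all variable names are invented for the example.

    # Illustrative sketch only: an adjusted pre/post comparison of the kind
    # the abstract describes. Data and variable names are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n_trained, n_control = 82, 214  # group sizes from the abstract

    df = pd.DataFrame({"trained": [1] * n_trained + [0] * n_control})
    df["pre"] = rng.normal(6.0, 1.5, len(df))  # baseline competency score
    df["post"] = df["pre"] + 0.8 * df["trained"] + rng.normal(0, 1.0, len(df))
    df["change"] = df["post"] - df["pre"]

    # Multiple linear regression: does the trained group improve more,
    # adjusting for where each respondent started?
    model = smf.ols("change ~ trained + pre", data=df).fit()
    print(model.summary())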

X Demographics

The data shown below were collected from the profiles of 7 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 127 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
United Kingdom       1    <1%
United States        1    <1%
South Africa         1    <1%
Unknown            124    98%

Demographic breakdown

Readers by professional status        Count   As %
Student > Master                         21    17%
Researcher                               19    15%
Student > Doctoral Student               17    13%
Student > Ph. D. Student                 14    11%
Other                                    11     9%
Other                                    22    17%
Unknown                                  23    18%

Readers by discipline                 Count   As %
Medicine and Dentistry                   27    21%
Social Sciences                          23    18%
Nursing and Health Professions           14    11%
Psychology                               10     8%
Business, Management and Accounting       4     3%
Other                                    18    14%
Unknown                                  31    24%
Attention Score in Context

This research output has an Altmetric Attention Score of 5. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 26 June 2015.
All research outputs: #7,104,817 of 25,311,095 outputs
Outputs from Implementation Science: #1,136 of 1,798 outputs
Outputs of similar age: #69,098 of 259,423 outputs
Outputs of similar age from Implementation Science: #28 of 61 outputs
Altmetric has tracked 25,311,095 research outputs across all sources so far. This one has received more attention than most of them and is in the 71st percentile.
So far Altmetric has tracked 1,798 research outputs from this source. They typically receive much more attention than average, with a mean Attention Score of 14.9. This one is in the 35th percentile, i.e., 35% of its peers scored the same or lower.
Older research outputs tend to score higher simply because they have had more time to accumulate mentions. To account for age, this Altmetric Attention Score can be compared to the 259,423 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 73% of its contemporaries.
We can also compare this research output to the 61 others from the same source that were published within six weeks on either side of this one. It has received more attention than average, scoring higher than 54% of its contemporaries.
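The comparisons above boil down to a percentile rank: the share of a comparison cohort whose scores are the same or lower. Below is a minimal Python sketch of that logic; the function and cohort values are invented for illustration and are not Altmetric's implementation.

    # Illustrative percentile-rank logic, matching the "scored the same or
    # lower" wording above. The cohort values are invented, not real data.
    def percentile_rank(score, cohort):
        """Percent of cohort members scoring the same as or lower than `score`."""
        at_or_below = sum(1 for s in cohort if s <= score)
        return 100.0 * at_or_below / len(cohort)

    mock_cohort = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
    print(percentile_rank(5, mock_cohort))  # -> 60.0 (60th percentile)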