
Accuracy of using automated methods for detecting adverse events from electronic health record data: a research protocol

Overview of attention for article published in Implementation Science, January 2015

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (51st percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • X: 6 users

Citations

  • Dimensions: 24 citations

Readers on

  • Mendeley: 128 readers
Title: Accuracy of using automated methods for detecting adverse events from electronic health record data: a research protocol
Published in: Implementation Science, January 2015
DOI: 10.1186/s13012-014-0197-6
Pubmed ID:
Authors: Christian M Rochefort, David L Buckeridge, Alan J Forster

Abstract

Background: Adverse events are associated with significant morbidity, mortality and cost in hospitalized patients. Measuring adverse events is necessary for quality improvement, but current detection methods are inaccurate, untimely and expensive. The advent of electronic health records and the development of automated methods for encoding and classifying electronic narrative data, such as natural language processing, offer an opportunity to identify potentially better methods. The objective of this study is to determine the accuracy of using automated methods for detecting three highly prevalent adverse events: a) hospital-acquired pneumonia, b) catheter-associated bloodstream infections, and c) in-hospital falls.

Methods/design: This validation study will be conducted at two large Canadian academic health centres: the McGill University Health Centre (MUHC) and The Ottawa Hospital (TOH). The study population consists of all medical, surgical and intensive care unit patients admitted to these centres between 2008 and 2014. An automated detection algorithm will be developed and validated for each of the three adverse events using electronic data extracted from multiple clinical databases. A random sample of MUHC patients will be used to develop the automated detection algorithms (cohort 1, development set). The accuracy of these algorithms will be assessed using chart review as the reference standard. Then, receiver operating characteristic (ROC) curves will be used to identify optimal cut points for each of the data sources. Multivariate logistic regression and the area under the curve (AUC) will be used to identify the optimal combination of data sources that maximizes the accuracy of adverse event detection. The most accurate algorithms will then be validated on a second random sample of MUHC patients (cohort 1, validation set), and accuracy will be measured using chart review as the reference standard. The most accurate algorithms validated at the MUHC will then be applied to TOH data (cohort 2), and their accuracy will be assessed using a reference standard assessment of the medical chart.

Discussion: There is a need for more accurate, timely and efficient measures of adverse events in acute care hospitals. This is a critical requirement for evaluating the effectiveness of preventive interventions and for tracking progress in patient safety over time.
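To make the analytic approach in Methods/design concrete, the sketch below shows one way a multivariate logistic regression can combine signals from several data sources, with the ROC curve used to pick a cut point and the AUC used to summarize discrimination against a chart-review reference standard. This is a hypothetical illustration, not the study's actual code: the feature names, simulated data, and use of scikit-learn are assumptions introduced here.

```python
# Illustrative sketch only (hypothetical data and features), mirroring the
# protocol's use of multivariate logistic regression, ROC curves and AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-admission signals from different clinical databases
# (e.g., an NLP score from narrative notes, a microbiology flag, a medication flag).
X = rng.normal(size=(n, 3))
# Hypothetical chart-review reference standard (1 = adverse event present).
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.0).astype(int)

# Development set: fit a multivariate logistic regression over the data sources.
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# AUC quantifies how well the combined data sources discriminate events from non-events.
auc = roc_auc_score(y, scores)

# One common choice of "optimal" cut point is Youden's J (sensitivity + specificity - 1).
fpr, tpr, thresholds = roc_curve(y, scores)
best = np.argmax(tpr - fpr)
print(f"AUC={auc:.3f}, cut point={thresholds[best]:.3f}, "
      f"sensitivity={tpr[best]:.3f}, specificity={1 - fpr[best]:.3f}")
```

In the protocol, a cut point and model selected this way on the development set would then be re-evaluated on the held-out validation set and on the second-site (TOH) cohort.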

X Demographics


The data shown below were collected from the profiles of 6 X users who shared this research output.
Mendeley readers


The data shown below were compiled from readership statistics for 128 Mendeley readers of this research output.

Geographical breakdown

Country             Count   As %
United Kingdom          1    <1%
Canada                  1    <1%
Unknown               126    98%

Demographic breakdown

Readers by professional status           Count   As %
Researcher                                  27    21%
Student > Ph. D. Student                    19    15%
Student > Master                            18    14%
Student > Postgraduate                       9     7%
Other                                        7     5%
Other                                       23    18%
Unknown                                     25    20%

Readers by discipline                    Count   As %
Medicine and Dentistry                      41    32%
Computer Science                            15    12%
Nursing and Health Professions               9     7%
Business, Management and Accounting          3     2%
Social Sciences                              3     2%
Other                                       18    14%
Unknown                                     39    30%
Attention Score in Context


This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 11 January 2015.
All research outputs: #8,535,684 of 25,374,917 outputs
Outputs from Implementation Science: #1,319 of 1,809 outputs
Outputs of similar age: #110,042 of 358,881 outputs
Outputs of similar age from Implementation Science: #31 of 47 outputs
Altmetric has tracked 25,374,917 research outputs across all sources so far. This one is in the 43rd percentile – i.e., 43% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,809 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.9. This one is in the 24th percentile – i.e., 24% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 358,881 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 51% of its contemporaries.
We're also able to compare this research output to 47 others from the same source and published within six weeks on either side of this one. This one is in the 34th percentile – i.e., 34% of its contemporaries scored the same or lower than it.
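All of the comparisons above use the same percentile-rank logic: the share of peer outputs that scored the same as or lower than this one. A minimal sketch, using made-up Attention Scores for illustration only:

```python
# Illustrative only: percentile rank as "the percentage of peers scoring the
# same or lower", matching how the comparisons above are phrased.
def percentile_rank(score, peer_scores):
    same_or_lower = sum(1 for s in peer_scores if s <= score)
    return 100.0 * same_or_lower / len(peer_scores)

# Hypothetical Attention Scores for contemporaneous outputs.
peers = [0, 1, 1, 2, 2, 3, 5, 8, 13, 21]
print(percentile_rank(3, peers))  # 60.0 -> scored >= 60% of these peers
```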