
Automation bias in electronic prescribing

Overview of attention for article published in BMC Medical Informatics and Decision Making, March 2017

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • Among the highest-scoring outputs from this source (#32 of 2,138)
  • High Attention Score compared to outputs of the same age (93rd percentile)
  • High Attention Score compared to outputs of the same age and source (96th percentile)

Mentioned by

2 blogs
38 X users
3 Facebook pages

Citations

57 Dimensions

Readers on

135 Mendeley
Title
Automation bias in electronic prescribing
Published in
BMC Medical Informatics and Decision Making, March 2017
DOI 10.1186/s12911-017-0425-5
Authors

David Lyell, Farah Magrabi, Magdalena Z. Raban, L.G. Pont, Melissa T. Baysari, Richard O. Day, Enrico Coiera

Abstract

Clinical decision support (CDS) in e-prescribing can improve safety by alerting users to potential errors, but it introduces new sources of risk. Automation bias (AB) occurs when users over-rely on CDS, reducing vigilance in information seeking and processing. Evidence of AB has been found in other clinical tasks, but it has not yet been tested with e-prescribing. This study tests for the presence of AB in e-prescribing and for the impact of task complexity and interruptions on AB. One hundred and twenty students in the final two years of a medical degree prescribed medicines for nine clinical scenarios using a simulated e-prescribing system. Quality of CDS (correct, incorrect and no CDS) and task complexity (low, low + interruption and high) were varied between conditions. Omission errors (failure to detect prescribing errors) and commission errors (acceptance of false positive alerts) were measured. Compared to scenarios with no CDS, correct CDS reduced omission errors by 38.3% (p < .0001, n = 120), 46.6% (p < .0001, n = 70), and 39.2% (p < .0001, n = 120) for low, low + interrupt and high complexity scenarios respectively. Incorrect CDS increased omission errors by 33.3% (p < .0001, n = 120), 24.5% (p < .009, n = 82), and 26.7% (p < .0001, n = 120). Participants made commission errors at rates of 65.8% (p < .0001, n = 120), 53.5% (p < .0001, n = 82), and 51.7% (p < .0001, n = 120). Task complexity and interruptions had no impact on AB. This study found evidence of AB omission and commission errors in e-prescribing. Verification of CDS alerts is key to avoiding AB errors; however, interventions focused on verification have had limited success to date. Clinicians should remain vigilant to the risks of CDS failures and verify CDS advice.
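The abstract's two automation-bias error types can be illustrated with a small classification sketch. This is hypothetical scoring logic based only on the definitions above (omission = a real prescribing error goes undetected; commission = a false positive alert is accepted); the function and parameter names are illustrative and are not taken from the study's materials or analysis.

```python
# Hypothetical sketch of the two error types defined in the abstract.
# Names and scenario encoding are illustrative assumptions.

def classify(has_real_error: bool, false_positive_alert: bool,
             user_flagged: bool):
    """Classify a participant's response to one prescribing scenario.

    has_real_error       -- the scenario contains a genuine prescribing error
    false_positive_alert -- CDS raised an alert on a safe prescription
    user_flagged         -- the participant treated the prescription as erroneous
    """
    if has_real_error and not user_flagged:
        # Failure to detect a real prescribing error.
        return "omission"
    if false_positive_alert and user_flagged and not has_real_error:
        # Acceptance of a false positive alert.
        return "commission"
    return None

# A participant who misses a genuine error commits an omission error:
print(classify(True, False, False))   # omission
# A participant who accepts a spurious alert commits a commission error:
print(classify(False, True, True))    # commission
```

Under this reading, over-reliance on CDS produces omission errors when the system stays silent (or advises incorrectly) and commission errors when it alerts falsely, which matches the two measures reported in the abstract.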

X Demographics

The data shown below were collected from the profiles of 38 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 135 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 135 100%

Demographic breakdown

Readers by professional status Count As %
Student > Master 17 13%
Researcher 16 12%
Student > Ph. D. Student 15 11%
Student > Doctoral Student 15 11%
Student > Bachelor 9 7%
Other 24 18%
Unknown 39 29%
Readers by discipline Count As %
Medicine and Dentistry 23 17%
Psychology 10 7%
Nursing and Health Professions 10 7%
Computer Science 8 6%
Engineering 7 5%
Other 27 20%
Unknown 50 37%
Attention Score in Context

This research output has an Altmetric Attention Score of 41. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 10 October 2022.
All research outputs
#988,919
of 25,295,968 outputs
Outputs from BMC Medical Informatics and Decision Making
#32
of 2,138 outputs
Outputs of similar age
#20,076
of 314,641 outputs
Outputs of similar age from BMC Medical Informatics and Decision Making
#2
of 31 outputs
Altmetric has tracked 25,295,968 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 96th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 2,138 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.3. This one has done particularly well, scoring higher than 98% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 314,641 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 93% of its contemporaries.
We're also able to compare this research output to 31 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 96% of its contemporaries.
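The percentile figures quoted above can be reproduced from the ranks and totals listed in the table. A plausible conversion, inferred here from the four figures on this page rather than taken from Altmetric's documented methodology, treats an output ranked r out of n as scoring at least as high as n − r + 1 outputs and rounds down:

```python
# Reconstructing the quoted percentiles from rank and total.
# The floor((n - r + 1) / n * 100) formula is an assumption inferred
# from the numbers on this page, not Altmetric's published formula.

def percentile(rank: int, total: int) -> int:
    return (total - rank + 1) * 100 // total

print(percentile(988_919, 25_295_968))  # 96  (all research outputs)
print(percentile(32, 2_138))            # 98  (this journal)
print(percentile(20_076, 314_641))      # 93  (outputs of similar age)
print(percentile(2, 31))                # 96  (similar age, same journal)
```

All four results match the percentiles stated in the surrounding text (96th, "higher than 98%", 93rd, and 96th), which is why this conversion seems a reasonable reading of how the rankings relate to the quoted percentiles.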