
Electronic Laboratory Medicine ordering with evidence-based Order sets in primary care (ELMO study): protocol for a cluster randomised trial

Overview of attention for article published in Implementation Science, December 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (86th percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • News: 1 news outlet
  • X: 3 X users

Citations

  • Dimensions: 10 citations

Readers on

  • Mendeley: 77 readers
Title
Electronic Laboratory Medicine ordering with evidence-based Order sets in primary care (ELMO study): protocol for a cluster randomised trial
Published in
Implementation Science, December 2017
DOI 10.1186/s13012-017-0685-6
Pubmed ID
Authors

Nicolas Delvaux, An De Sutter, Stijn Van de Velde, Dirk Ramaekers, Steffen Fieuws, Bert Aertgeerts

Abstract

Laboratory testing is an important clinical act with a valuable role in the screening, diagnosis, management and monitoring of diseases and therapies. However, inappropriate laboratory test ordering is frequent, increasing health care spending and negatively affecting quality of care. Inappropriate tests may also produce false-positive results and trigger excessive downstream activities. Clinical decision support systems (CDSSs) have shown promise in influencing physicians' test-ordering behaviour and improving appropriateness. Order sets, a form of CDSS in which a limited set of evidence-based tests is proposed for a series of indications and integrated into a computerised physician order entry (CPOE) system, have been shown to be effective in reducing the volume of ordered laboratory tests, but convincing evidence that they influence appropriateness is lacking. The aim of this study is to evaluate the effect of order sets on the quality and quantity of laboratory test orders by physicians. We also aim to evaluate the effect of order sets on diagnostic error and to explore the effect on downstream or cascade activities.

We will conduct a cluster randomised controlled trial in Belgian primary care practices. The study is powered to measure two outcomes. We will primarily measure the influence of our CDSS on the appropriateness of laboratory test ordering. Additionally, we will measure the influence on diagnostic error. We will also explore the effects of our intervention on cascade activities due to altered results of inappropriate tests.

We have designed a study that should be able to demonstrate whether a CDSS aimed at diagnostic testing is not only able to influence appropriateness but is also safe with respect to diagnostic error. These findings will inform a larger, nationwide implementation of this CDSS. Trial registration: ClinicalTrials.gov, NCT02950142.

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 77 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 77 100%

Demographic breakdown

Readers by professional status Count As %
Student > Master 13 17%
Student > Bachelor 8 10%
Other 7 9%
Researcher 7 9%
Student > Ph. D. Student 6 8%
Other 11 14%
Unknown 25 32%
Readers by discipline Count As %
Medicine and Dentistry 23 30%
Nursing and Health Professions 4 5%
Pharmacology, Toxicology and Pharmaceutical Science 4 5%
Unspecified 3 4%
Engineering 2 3%
Other 14 18%
Unknown 27 35%
Attention Score in Context

This research output has an Altmetric Attention Score of 12. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 04 January 2024.
All research outputs: #2,972,795 of 25,107,281 outputs
Outputs from Implementation Science: #616 of 1,798 outputs
Outputs of similar age: #62,504 of 452,270 outputs
Outputs of similar age from Implementation Science: #23 of 40 outputs
Altmetric has tracked 25,107,281 research outputs across all sources so far. Compared to these, this one has done well and is in the 88th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,798 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.8. This one has received more attention than average, scoring higher than 65% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 452,270 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 86% of its contemporaries.
We're also able to compare this research output to 40 others from the same source and published within six weeks on either side of this one. This one is in the 45th percentile – i.e., 45% of its contemporaries scored the same or lower than it.
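The percentile figures in this section follow directly from the rank and total quoted for each comparison group. As a rough illustration only (a minimal sketch, not Altmetric's published algorithm; their exact rounding and tie-handling are not described on this page), the percentiles above can be approximated by reading them as "share of outputs that scored the same as or lower than this one":

  # Minimal sketch (not Altmetric's algorithm) reproducing the percentiles
  # quoted above from the rank/total pairs listed on this page.

  def approx_percentile(rank: int, total: int) -> float:
      """Percentage of outputs ranked at or below `rank` out of `total` (the output counts itself)."""
      return 100.0 * (total - rank + 1) / total

  comparisons = {
      "All research outputs": (2_972_795, 25_107_281),   # quoted as 88th percentile
      "Outputs of similar age": (62_504, 452_270),       # quoted as 86th percentile
      "Similar age, same source": (23, 40),              # quoted as 45th percentile
  }

  for label, (rank, total) in comparisons.items():
      print(f"{label}: ~{approx_percentile(rank, total):.0f}th percentile")

Running this prints roughly 88, 86 and 45 for the three comparison groups, matching the figures quoted above.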