When do confounding by indication and inadequate risk adjustment bias critical care studies? A simulation study

Overview of attention for article published in Critical Care, December 2015
About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (90th percentile)
  • Good Attention Score compared to outputs of the same age and source (65th percentile)

Citations

50 Dimensions

Readers on

49 Mendeley
Title
When do confounding by indication and inadequate risk adjustment bias critical care studies? A simulation study
Published in
Critical Care, December 2015
DOI 10.1186/s13054-015-0923-8
Authors

Michael W Sjoding, Kaiyi Luo, Melissa A Miller, Theodore J Iwashyna

Abstract

In critical care observational studies, when clinicians administer different treatments to sicker patients, any treatment comparison will be confounded by differences in severity of illness between patients. We sought to investigate the extent to which observational studies assessing treatments are at risk of incorrectly concluding that such treatments are ineffective, or even harmful, because of inadequate risk adjustment.

We performed Monte Carlo simulations of observational studies evaluating the effect of a hypothetical treatment on mortality in critically ill patients. We set the treatment either to have no association with mortality or to have a truly beneficial effect, but to be administered more often to sicker patients. We varied the strength of the treatment's true effect, the strength of confounding, the study size, the patient population, and the accuracy of the severity-of-illness risk adjustment (area under the receiver operating characteristic curve, AUROC). We measured the rates at which studies reached inaccurate conclusions about the treatment's true effect because of confounding, and the odds ratios for mortality measured in such false associations.

Simulated observational studies employing adequate risk adjustment were generally able to measure a treatment's true effect. As risk adjustment worsened, the rate at which studies incorrectly concluded that the treatment provided no benefit or caused harm increased, especially when the sample size was large (n = 10,000). Even in scenarios of only low confounding, studies using lower-accuracy risk adjusters (AUROC < 0.66) falsely concluded that a beneficial treatment was harmful. Measured odds ratios for mortality of 1.4 or higher were possible when the treatment's true beneficial effect was an odds ratio for mortality of 0.6 or 0.8.

Large observational studies confounded by severity of illness have a high likelihood of obtaining incorrect results even after employing conventionally "acceptable" levels of risk adjustment, and the resulting effect sizes are large enough to be construed as true associations. Reporting the AUROC of the risk adjustment used in the analysis may facilitate evaluation of a study's risk of confounding.
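The simulation design described in the abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' actual code: the parameter values (confounding strength 1.5, true treatment odds ratio 0.6, noise level on the degraded risk adjuster, n = 20,000) are assumptions chosen only to reproduce the qualitative phenomenon — a truly beneficial treatment that appears harmful when risk adjustment is crude or absent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logit(X, y, iters=30):
    """Logistic regression via Newton-Raphson; returns coefficients,
    intercept first."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ beta)
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

def auroc(score, y):
    """AUROC computed from the Mann-Whitney U statistic."""
    ranks = np.empty(len(score))
    ranks[np.argsort(score)] = np.arange(1, len(score) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# --- one simulated observational study (illustrative parameters) ---
n = 20_000
true_or = 0.6                        # treatment is truly beneficial
severity = rng.normal(size=n)        # latent severity of illness
treat = rng.binomial(1, sigmoid(1.5 * severity))   # sicker -> more often treated
death = rng.binomial(1, sigmoid(-1 + 1.5 * severity
                                + np.log(true_or) * treat))

# The analyst's risk adjuster: the true severity vs. a noisy proxy
# (the added noise lowers the adjuster's AUROC for mortality)
noisy = severity + 1.5 * rng.normal(size=n)

or_crude = np.exp(fit_logit(treat[:, None], death)[1])
or_adj   = np.exp(fit_logit(np.column_stack([treat, severity]), death)[1])
or_noisy = np.exp(fit_logit(np.column_stack([treat, noisy]), death)[1])

print(f"AUROC, perfect adjuster: {auroc(severity, death):.2f}")
print(f"AUROC, noisy adjuster:   {auroc(noisy, death):.2f}")
print(f"crude OR:                {or_crude:.2f}")  # confounded: looks harmful
print(f"adjusted OR (perfect):   {or_adj:.2f}")    # recovers ~0.6
print(f"adjusted OR (noisy):     {or_noisy:.2f}")  # residual confounding
```

Under this setup the unadjusted odds ratio exceeds 1 (the beneficial treatment looks harmful), adjustment with the true severity recovers an odds ratio near 0.6, and adjustment with the lower-AUROC proxy lands in between — the residual-confounding pattern the paper quantifies across its scenarios.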

X Demographics

The data shown below were collected from the profiles of 20 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 49 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Brazil         1     2%
Unknown       48    98%

Demographic breakdown

Readers by professional status    Count   As %
Student > Ph.D. Student              11    22%
Other                                 9    18%
Researcher                            8    16%
Student > Master                      3     6%
Professor > Associate Professor       2     4%
Other                                 5    10%
Unknown                              11    22%

Readers by discipline                                  Count   As %
Medicine and Dentistry                                    23    47%
Pharmacology, Toxicology and Pharmaceutical Science        2     4%
Nursing and Health Professions                             1     2%
Agricultural and Biological Sciences                       1     2%
Business, Management and Accounting                        1     2%
Other                                                      4     8%
Unknown                                                   17    35%
Attention Score in Context

This research output has an Altmetric Attention Score of 15. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 08 May 2022.
All research outputs: #2,384,968 of 25,371,288 outputs
Outputs from Critical Care: #2,087 of 6,554 outputs
Outputs of similar age: #38,342 of 395,397 outputs
Outputs of similar age from Critical Care: #159 of 466 outputs
Altmetric has tracked 25,371,288 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 90th percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 6,554 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 20.8. This one has gotten more attention than average, scoring higher than 68% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 395,397 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 90% of its contemporaries.
We're also able to compare this research output to 466 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 65% of its contemporaries.