
Failure of a numerical quality assessment scale to identify potential risk of bias in a systematic review: a comparison study

Overview of attention for article published in BMC Research Notes, June 2015

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (55th percentile)
  • Good Attention Score compared to outputs of the same age and source (74th percentile)

Mentioned by

  • 5 X users

Citations

  • 185 Dimensions

Readers on

  • 232 Mendeley
  • 1 CiteULike
Published in: BMC Research Notes, June 2015
DOI: 10.1186/s13104-015-1181-1
Authors

Seán R O’Connor, Mark A Tully, Brigid Ryan, Judy M Bradley, George D Baxter, Suzanne M McDonough

Abstract

Assessing methodological quality of primary studies is an essential component of systematic reviews. Following a systematic review which used a domain based system [United States Preventative Services Task Force (USPSTF)] to assess methodological quality, a commonly used numerical rating scale (Downs and Black) was also used to evaluate the included studies and comparisons were made between quality ratings assigned using the two different methods. Both tools were used to assess the 20 randomized and quasi-randomized controlled trials examining an exercise intervention for chronic musculoskeletal pain which were included in the review. Inter-rater reliability and levels of agreement were determined using intraclass correlation coefficients (ICC). Influence of quality on pooled effect size was examined by calculating the between group standardized mean difference (SMD). Inter-rater reliability indicated at least substantial levels of agreement for the USPSTF system (ICC 0.85; 95% CI 0.66, 0.94) and Downs and Black scale (ICC 0.94; 95% CI 0.84, 0.97). Overall level of agreement between tools (ICC 0.80; 95% CI 0.57, 0.92) was also good. However, the USPSTF system identified a number of studies (n = 3/20) as "poor" due to potential risks of bias. Analysis revealed substantially greater pooled effect sizes in these studies (SMD -2.51; 95% CI -4.21, -0.82) compared to those rated as "fair" (SMD -0.45; 95% CI -0.65, -0.25) or "good" (SMD -0.38; 95% CI -0.69, -0.08). In this example, use of a numerical rating scale failed to identify studies at increased risk of bias, and could have potentially led to imprecise estimates of treatment effect. Although based on a small number of included studies within an existing systematic review, we found the domain based system provided a more structured framework by which qualitative decisions concerning overall quality could be made, and was useful for detecting potential sources of bias in the available evidence.
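The abstract pools effect sizes as between-group standardized mean differences (SMD). A minimal sketch of that calculation, using Cohen's d with a pooled standard deviation (all numbers below are illustrative, not the paper's data):

```python
# Illustrative sketch (not the authors' code): between-group standardized
# mean difference with a pooled standard deviation.
from math import sqrt

def pooled_sd(sd1, n1, sd2, n2):
    # Pooled standard deviation of two independent groups
    return sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def smd(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    # Negative values favour the intervention when lower scores
    # (e.g. pain ratings) indicate improvement
    return (mean_tx - mean_ctrl) / pooled_sd(sd_tx, n_tx, sd_ctrl, n_ctrl)

print(round(smd(20.0, 5.0, 30, 25.0, 5.0, 30), 2))  # → -1.0
```

Note this is unadjusted Cohen's d; meta-analyses often apply the small-sample (Hedges' g) correction before pooling.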

X Demographics

The data shown below were collected from the profiles of 5 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 232 Mendeley readers of this research output.

Geographical breakdown

Country    Count  As %
Unknown    232    100%

Demographic breakdown

Readers by professional status    Count  As %
Student > Master                  44     19%
Student > Bachelor                37     16%
Student > Ph. D. Student          30     13%
Researcher                        24     10%
Student > Postgraduate            15     6%
Other                             33     14%
Unknown                           49     21%

Readers by discipline             Count  As %
Medicine and Dentistry            55     24%
Nursing and Health Professions    43     19%
Psychology                        21     9%
Sports and Recreations            10     4%
Social Sciences                   8      3%
Other                             32     14%
Unknown                           63     27%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 11 March 2016.
All research outputs                              #12,929,245 of 23,312,088 outputs
Outputs from BMC Research Notes                   #1,514 of 4,306 outputs
Outputs of similar age                            #116,913 of 267,712 outputs
Outputs of similar age from BMC Research Notes    #21 of 78 outputs
Altmetric has tracked 23,312,088 research outputs across all sources so far. This one is in the 44th percentile – i.e., 44% of other outputs scored the same or lower than it.
So far Altmetric has tracked 4,306 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.7. This one has gotten more attention than average, scoring higher than 64% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 267,712 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 55% of its contemporaries.
We're also able to compare this research output to 78 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 74% of its contemporaries.
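The percentile comparisons above all follow the same rule: an output's percentile is the share of tracked outputs whose Attention Score is the same or lower. A toy sketch of that rule (the scores below are made up, not Altmetric data):

```python
# Hypothetical sketch of the percentile rule described above:
# percentile = share of tracked outputs scoring the same or lower.
def attention_percentile(score, all_scores):
    # Count outputs at or below this score, express as a percentage
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100.0 * at_or_below / len(all_scores)

toy_scores = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]  # made-up tracked outputs
print(attention_percentile(3, toy_scores))  # → 50.0
```

With this definition, ties count in the output's favour, which is why an output with a modest score can still sit above the median of its cohort.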