
Multilevel analysis quantifies variation in the experimental effect while optimizing power and preventing false positives

Overview of attention for article published in BMC Neuroscience, December 2015

About this Attention Score

  • Good Attention Score compared to outputs of the same age (75th percentile)
  • Good Attention Score compared to outputs of the same age and source (71st percentile)

Mentioned by

5 X users
1 Q&A thread

Readers on

115 Mendeley readers
1 CiteULike reader
DOI 10.1186/s12868-015-0228-5
Pubmed ID
Authors

Emmeke Aarts, Conor V. Dolan, Matthijs Verhage, Sophie van der Sluis

Abstract

In neuroscience, experimental designs in which multiple measurements are collected in the same research object or treatment facility are common. Such designs result in clustered or nested data. When clusters include measurements from different experimental conditions, both the mean of the dependent variable and the effect of the experimental manipulation may vary over clusters. In practice, this type of cluster-related variation is often overlooked. Not accommodating cluster-related variation can result in inferential errors concerning the overall experimental effect. The exact effect of ignoring the clustered nature of the data depends on the effect of clustering. Using simulation studies we show that cluster-related variation in the experimental effect, if ignored, results in a false positive rate (i.e., Type I error rate) that is appreciably higher (up to ~20–50 %) than the chosen α-level (e.g., α = 0.05). If the effect of clustering is limited to the intercept, the failure to accommodate clustering can result in a loss of statistical power to detect the overall experimental effect. This effect is most pronounced when both the magnitude of the experimental effect and the sample size are small (e.g., ~25 % less power given an experimental effect with effect size d of 0.20, and a sample size of 10 clusters and 5 observations per experimental condition per cluster). When data are collected from a research design in which observations from the same cluster are obtained in different experimental conditions, multilevel analysis should be used to analyze the data. The use of multilevel analysis not only ensures correct statistical interpretation of the overall experimental effect, but also provides a valuable test of the generalizability of the experimental effect over (intrinsically) varying settings, and a means to reveal the cause of cluster-related variation in the experimental effect.
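The abstract's central point can be illustrated with a small simulation. The sketch below (Python, using scipy and statsmodels; all parameter values are illustrative choices, not the paper's simulation settings) generates clustered data in which the treatment effect varies over clusters while the true overall effect is zero, shows that a pooled t-test ignoring clustering rejects far more often than the nominal 5 %, and then fits a multilevel model with a random intercept and a random slope for condition — the kind of analysis the abstract recommends.

```python
# Sketch of the abstract's Type I error point, with illustrative parameter
# values (not the paper's simulation settings). The true overall effect is
# zero, but the effect still varies over clusters; a pooled t-test that
# ignores this clustering rejects far more often than the nominal 5 %.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_clusters, n_per_cond = 10, 5   # sample sizes mirror the abstract's example
intercept_sd, slope_sd = 0.5, 1.0

def simulate():
    """One dataset: both experimental conditions observed in every cluster."""
    rows = []
    for c in range(n_clusters):
        u0 = rng.normal(0.0, intercept_sd)  # cluster-specific intercept
        u1 = rng.normal(0.0, slope_sd)      # cluster-specific effect deviation
        for cond in (0, 1):
            for _ in range(n_per_cond):
                rows.append({"cluster": c, "cond": cond,
                             "y": u0 + u1 * cond + rng.normal(0.0, 1.0)})
    return pd.DataFrame(rows)

# Monte Carlo under the null: rejection rate of a t-test ignoring clustering.
n_reps = 500
rejections = 0
for _ in range(n_reps):
    df = simulate()
    res = stats.ttest_ind(df.loc[df.cond == 1, "y"],
                          df.loc[df.cond == 0, "y"])
    rejections += res.pvalue < 0.05
print(f"naive Type I error rate: {rejections / n_reps:.3f}")  # nominal: 0.05

# A multilevel model with a random intercept and a random slope for
# condition accommodates the cluster-related variation in the effect.
df = simulate()
mlm = smf.mixedlm("y ~ cond", df, groups="cluster", re_formula="~cond").fit()
print(mlm.summary())
```

The key design choice is `re_formula="~cond"`, which lets the condition effect vary over clusters; its estimated variance component is also the test of generalizability the abstract mentions.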

X Demographics

The data shown below were collected from the profiles of 5 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 115 Mendeley readers of this research output.

Geographical breakdown

Country     Count   As %
Unknown       115   100%

Demographic breakdown

Readers by professional status   Count   As %
Student > PhD Student               32    28%
Researcher                          18    16%
Student > Master                    14    12%
Student > Doctoral Student          12    10%
Student > Bachelor                   7     6%
Other                               19    17%
Unknown                             13    11%

Readers by discipline                          Count   As %
Neuroscience                                      20    17%
Agricultural and Biological Sciences              16    14%
Biochemistry, Genetics and Molecular Biology      14    12%
Psychology                                        13    11%
Medicine and Dentistry                             8     7%
Other                                             25    22%
Unknown                                           19    17%
Attention Score in Context

This research output has an Altmetric Attention Score of 5. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 28 April 2021.
All research outputs
#6,733,450
of 25,375,376 outputs
Outputs from BMC Neuroscience
#286
of 1,293 outputs
Outputs of similar age
#97,320
of 402,720 outputs
Outputs of similar age from BMC Neuroscience
#14
of 46 outputs
Altmetric has tracked 25,375,376 research outputs across all sources so far. This one has received more attention than most of these and is in the 73rd percentile.
So far Altmetric has tracked 1,293 research outputs from this source. They receive a mean Attention Score of 4.7. This one has done well, scoring higher than 76% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 402,720 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 75% of its contemporaries.
We're also able to compare this research output to 46 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 71% of its contemporaries.