
The testing effect for mediator final test cues and related final test cues in online and laboratory experiments

Overview of attention for article published in BMC Psychology, May 2016

Mentioned by: 1 X user
Citations: 10 (Dimensions)
Readers: 31 (Mendeley)
DOI 10.1186/s40359-016-0127-2
Pubmed ID
Authors

Leonora C. Coppens, Peter P. J. L. Verkoeijen, Samantha Bouwmeester, Remy M. J. P. Rikers

Abstract

The testing effect is the finding that information that is retrieved during learning is more often correctly retrieved on a final test than information that is restudied. According to the semantic mediator hypothesis, the testing effect arises because retrieval practice of cue-target pairs (mother-child) activates semantically related mediators (father) more than restudying does. Hence, the mediator-target (father-child) association should be stronger for retrieved than for restudied pairs. Indeed, Carpenter (2011) found a larger testing effect when participants received mediators (father) than when they received target-related words (birth) as final test cues. The present study started as an attempt to test an alternative account of Carpenter's results. However, it turned into a series of conceptual (Experiment 1) and direct (Experiments 2 and 3) replications conducted with online samples. The results of these online replications were compared with those of similar existing laboratory experiments through small-scale meta-analyses. The results showed that (1) the magnitude of the raw mediator testing effect advantage is comparable for online and laboratory experiments, (2) in both online and laboratory experiments the magnitude of the raw mediator testing effect advantage is smaller than in Carpenter's original experiment, and (3) the testing effect for related cues varies considerably between online experiments. The variability in the testing effect for related cues in online experiments could point toward moderators of the related-cue short-term testing effect. The raw mediator testing effect advantage is smaller than in Carpenter's original experiment.

X Demographics

The data shown below were collected from the profile of the 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 31 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
United States 1 3%
Unknown 30 97%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 4 13%
Student > Doctoral Student 3 10%
Student > Bachelor 3 10%
Student > Postgraduate 3 10%
Researcher 2 6%
Other 6 19%
Unknown 10 32%

Readers by discipline Count As %
Psychology 15 48%
Medicine and Dentistry 2 6%
Nursing and Health Professions 1 3%
Social Sciences 1 3%
Arts and Humanities 1 3%
Other 0 0%
Unknown 11 35%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 02 June 2016.
All research outputs: #21,264,673 of 23,881,329 outputs
Outputs from BMC Psychology: #813 of 866 outputs
Outputs of similar age: #298,540 of 342,361 outputs
Outputs of similar age from BMC Psychology: #15 of 15 outputs

Altmetric has tracked 23,881,329 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 866 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 18.2. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 342,361 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 15 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
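
The percentile framing used throughout this section ("X% of other outputs scored the same or lower") can be expressed as a small calculation. The sketch below is illustrative only: the function name, the placeholder peer scores, and the simple tie handling are assumptions made for the example, not Altmetric's actual implementation.

```python
# Illustrative only: percentile as the share of other outputs scoring the same or lower.
# The peer scores below are hypothetical; Altmetric's real calculation (tie handling,
# rounding, and which outputs count as peers) may differ.

def percentile_among(score, peer_scores):
    """Percentage of peer outputs whose score is less than or equal to `score`."""
    if not peer_scores:
        return 0.0
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)

this_output = 1                                  # Attention Score reported on this page
contemporaries = [0, 0, 0, 1, 2, 5, 18, 40, 96]  # hypothetical same-age outputs
print(f"percentile: {percentile_among(this_output, contemporaries):.0f}%")
```

With the hypothetical scores above the output lands in the 44th percentile; on the real page, comparing against all 342,361 same-age outputs yields the 1st percentile reported here.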