
The testing effect for mediator final test cues and related final test cues in online and laboratory experiments

Overview of attention for an article published in BMC Psychology, May 2016

Mentioned by

Twitter: 1 tweeter

Citations

Dimensions: 6

Readers on

Mendeley: 20
Title
The testing effect for mediator final test cues and related final test cues in online and laboratory experiments
Published in
BMC Psychology, May 2016
DOI 10.1186/s40359-016-0127-2
PubMed ID
Authors

Leonora C. Coppens, Peter P. J. L. Verkoeijen, Samantha Bouwmeester, Remy M. J. P. Rikers

Abstract

The testing effect is the finding that information that is retrieved during learning is more often correctly retrieved on a final test than information that is restudied. According to the semantic mediator hypothesis, the testing effect arises because retrieval practice of cue-target pairs (mother-child) activates semantically related mediators (father) more than restudying does. Hence, the mediator-target (father-child) association should be stronger for retrieved pairs than for restudied pairs. Indeed, Carpenter (2011) found a larger testing effect when participants received mediators (father) than when they received target-related words (birth) as final test cues. The present study started as an attempt to test an alternative account of Carpenter's results. However, it turned into a series of conceptual (Experiment 1) and direct (Experiments 2 and 3) replications conducted with online samples. The results of these online replications were compared with those of similar existing laboratory experiments through small-scale meta-analyses. The results showed that (1) the magnitude of the raw mediator testing effect advantage is comparable for online and laboratory experiments, (2) in both online and laboratory experiments the magnitude of the raw mediator testing effect advantage is smaller than in Carpenter's original experiment, and (3) the testing effect for related cues varies considerably between online experiments. The variability in the testing effect for related cues in online experiments could point toward moderators of the related-cue short-term testing effect. The raw mediator testing effect advantage is smaller than in Carpenter's original experiment.
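The abstract notes that the online replications were pooled with comparable laboratory experiments through small-scale meta-analyses. As a rough illustration of how such a pooled estimate is commonly formed (a generic inverse-variance, fixed-effect summary; this is not the authors' analysis code, and the function name and inputs below are assumptions made for the example):

```python
import numpy as np

def fixed_effect_meta(effect_sizes, variances):
    """Inverse-variance weighted (fixed-effect) summary of per-experiment effects.

    Illustrative sketch only: `effect_sizes` and `variances` stand in for an
    effect size (e.g. a standardized mean difference) and its sampling
    variance from each online or laboratory experiment.
    """
    d = np.asarray(effect_sizes, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # weight each experiment by 1 / variance
    pooled = np.sum(w * d) / np.sum(w)             # weighted mean effect
    se = np.sqrt(1.0 / np.sum(w))                  # standard error of the pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)  # estimate and 95% CI

# Example call with made-up numbers, purely to show the interface:
# fixed_effect_meta([0.40, 0.15, 0.22], [0.02, 0.03, 0.025])
```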

Twitter Demographics

The data shown below were collected from the profile of 1 tweeter who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 20 Mendeley readers of this research output.

Geographical breakdown

Country            Count   % of readers
United States          1             5%
Unknown               19            95%

Demographic breakdown

Readers by professional status     Count   % of readers
Student > Postgraduate                 3            15%
Student > Bachelor                     3            15%
Student > Ph.D. Student                3            15%
Professor > Associate Professor        2            10%
Student > Doctoral Student             2            10%
Other                                  4            20%
Unknown                                3            15%

Readers by discipline              Count   % of readers
Psychology                            11            55%
Medicine and Dentistry                 2            10%
Nursing and Health Professions         1             5%
Social Sciences                        1             5%
Arts and Humanities                    1             5%
Other                                  0             0%
Unknown                                4            20%

Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 02 June 2016.
All research outputs: #6,796,943 of 7,847,043 outputs
Outputs from BMC Psychology: #162 of 174 outputs
Outputs of similar age: #224,440 of 269,354 outputs
Outputs of similar age from BMC Psychology: #16 of 17 outputs
Altmetric has tracked 7,847,043 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 174 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 16.7. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 269,354 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 17 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
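As a rough sketch of how a percentile of this kind can be read (the share of outputs in a comparison set scoring the same as or lower than this one), the snippet below illustrates the calculation; the function name and the example scores are assumptions for the illustration, not Altmetric's actual implementation:

```python
def percentile_rank(score, comparison_scores):
    """Percent of outputs in `comparison_scores` scoring <= `score`.

    Illustrative only: `comparison_scores` stands in for the Attention Scores
    of the comparison set (e.g. outputs published within six weeks of this one).
    """
    at_or_below = sum(1 for s in comparison_scores if s <= score)
    return 100.0 * at_or_below / len(comparison_scores)

# Example with made-up scores: an output scoring 1 among mostly higher-scoring
# outputs lands in a low percentile.
print(percentile_rank(1, [1, 2, 5, 8, 16, 40, 75, 120, 300, 999]))  # -> 10.0
```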