Lumosity “brain training” games have no beneficial effects on cognition, according to a paper just published in the Journal of Neuroscience.
According to the authors, led by UPenn psychologist Joseph W. Kable, Lumosity “appears to have no benefits in healthy young adults above those of standard video games.”
In the study, 128 young adults were randomly assigned to either 10 weeks of Lumosity training or a control condition: 10 weeks of playing normal, non-brain-based online videogames. The Lumosity group got much better at the Lumosity tasks over time, as shown by their rising scores on the Lumosity Performance Index.
However, neither Lumosity nor the control had any effect on the key outcome: executive function. Executive function refers to cognitive processes that regulate behaviour and help make reasoned (as opposed to impulsive) decisions. Kable et al. measured executive function with two decision-making tasks called delay discounting and risk sensitivity. Neither group improved on either task.
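For readers unfamiliar with these tasks: in delay discounting, participants choose between smaller rewards available sooner and larger rewards available later, and the steepness with which they devalue delayed rewards indexes impulsivity. The usual way of quantifying this in the literature (I'm describing the general approach here, not necessarily the paper's exact analysis) is the hyperbolic model

V = A / (1 + kD)

where V is the subjective value of a reward of amount A delayed by D days, and the fitted discount rate k is higher for more impulsive choosers.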
In other words, mastering the Lumosity tasks didn’t generalize to better executive function, even though many of the tasks were designed to do just that.
On other cognitive measures, it was a similar story. The Lumosity group showed no better performance than the videogame group on tests of working memory, sustained attention, and other cognitive abilities.
Is it possible that Lumosity was “training the brain” but that somehow this neural effect didn’t manifest as better performance? It doesn’t seem likely. Kable et al. used fMRI to measure brain activation in the participants while they were performing the executive function tasks. This revealed no difference in brain activity between the Lumosity and control groups either.
In my view this is a very well-designed study. The use of an active control (videogames) will have helped to reduce placebo and other non-specific influences such as the Hawthorne effect. Importantly, the control group were instructed to keep to the same training schedule (five 30-minute sessions per week) as the Lumosity group. Many previous studies of “brain training” used less engaging control treatments, or none at all.
Kable et al.’s study does have one main weakness though, which is that it’s just not very big. With 64 participants per group, this study wasn’t tiny, but in 2017 it’s pretty small. Seven years ago there was a similar study with n=11,430 (although it didn’t examine Lumosity tasks). Lumosity might therefore argue that Kable et al.’s study lacked the statistical power to detect the benefits of their product.
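As a rough illustration of the power issue (my own back-of-the-envelope calculation, not a figure from the paper), a simple two-group comparison with 64 participants per group only has 80% power to detect effects of about half a standard deviation or larger:

# Back-of-the-envelope power calculation for a two-sample comparison
# with 64 participants per group (illustrative only; the paper's
# analyses are more involved than a plain t-test).
from statsmodels.stats.power import TTestIndPower

# Solve for the smallest standardized effect (Cohen's d) detectable
# with 80% power at a two-sided alpha of 0.05.
detectable_d = TTestIndPower().solve_power(effect_size=None, nobs1=64,
                                           alpha=0.05, power=0.8, ratio=1.0)
print(round(detectable_d, 2))  # roughly 0.50, i.e. a medium-sized effect

In other words, a genuine but modest benefit of Lumosity could slip through a study of this size undetected.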
This study would probably have been much bigger if the fMRI component hadn’t been included. MRIs are expensive. I’m not sure the MRI adds that much to the study, in all honesty. Then again, the authors might have struggled to get a grant without promising to do some fMRI: the allure of neuroimaging can have a powerful effect on funders.
It’s also notable that Kable et al. studied young and healthy adults. It’s possible that people with existing cognitive impairment might benefit more from “brain training”.