Ed Yong has an excellent post about how many neuroscience studies are underpowered (too few study subjects). These studies often can’t detect real effects, especially moderate or weak ones. Just as bad, for a result from a small study to reach statistical significance, the estimated effect has to be so ‘surprising’ (i.e., strong) that you should either suspect you screwed up the experiment or recognize it as a fluke. But what struck me was this part:
But ultimately, the problem of underpowered studies ties into a recurring lament—that scientists face incentives that aren’t geared towards producing reliable results. Small, underpowered studies are great at producing what individuals and journals need—lots of new, interesting, significant and publishable results—but poor at producing what science as a whole needs—lots of true results. As long as these incentives continue to be poorly aligned, underpowered studies will remain a regular presence. “It would take a brave soul to do a tenth of the studies they were planning to do and just do a really big adequately powered one unless they’re secure enough in their career,” says Munafo.
This is why the team is especially keen that people who make decisions about funding in science will pay attention to his analysis. “If you have lots of people running studies that are too small to get a clear answer, that’s more wasteful in the long-term,” Munafo says. And if those studies involve animals, there is a clear ethical problem. “You end up sacrificing more animals than if you’d just run a single, large authoritative study in the first place. Paradoxically, I know people who’ve submitted animal grants that are powered to 95 percent but been told: ‘This is too much. You’re using too many animals.’”
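You can see the significance filter in a quick simulation. This is just an illustrative sketch, not numbers from Yong’s post: assume a true effect of 0.3 standard deviations, a simple one-sample z-test with known variance, and a choice of 15 subjects (small study) versus 150 (the “single, large authoritative study”). The question is how often each design detects the effect, and how inflated the estimate is when a small study does get lucky.

```python
import math
import random

random.seed(0)

TRUE_EFFECT = 0.3   # hypothetical true effect, in SD units
SIGMA = 1.0         # known standard deviation (simplifying assumption)
Z_CRIT = 1.96       # two-sided 5% significance threshold
RUNS = 20000        # number of simulated experiments per design

def simulate(n):
    """Run many experiments of size n; return the power (fraction that
    reach significance) and how inflated the significant estimates are
    relative to the true effect."""
    sig_effects = []
    for _ in range(RUNS):
        sample = [random.gauss(TRUE_EFFECT, SIGMA) for _ in range(n)]
        est = sum(sample) / n
        z = est * math.sqrt(n) / SIGMA   # z-test, known sigma
        if abs(z) > Z_CRIT:
            sig_effects.append(abs(est))
    power = len(sig_effects) / RUNS
    inflation = (sum(sig_effects) / len(sig_effects)) / TRUE_EFFECT
    return power, inflation

for n in (15, 150):
    power, inflation = simulate(n)
    print(f"n={n:3d}: power={power:.2f}, "
          f"significant estimates average {inflation:.1f}x the true effect")
```

With these assumed numbers, the small study misses the real effect roughly four times out of five, and when it does “find” it, the published estimate is nearly double the truth; the large study catches the effect almost every time and estimates it accurately. Lots of small studies buy you lots of publishable flukes, not lots of true results.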
In the past, I’ve argued that to really answer important questions, we have to increase the scale and scope of projects (what I call “go Manhattan Project on its ass”). But that can’t happen unless we rethink funding, as the size of an R01 grant (the typical faculty researcher grant) simply isn’t large enough. While $250,000 per year might seem big, typically about two-thirds of that goes straight into salaries. There just isn’t that much left over for large sample sizes.
In other words, the funding system is not designed to collect the large samples we need to do the science correctly. Seriously addressing this problem would require fewer, larger projects (even if we increased overall funding). For those who argue that this amounts to ‘picking winners and losers’, it’s worth remembering that we’re not picking a whole lot of winners right now.
It’s the funding, stupid.