Going Manhattan Project on Science’s Ass: You Get the Science You (Don’t) Pay For

I’ve argued before that the funding structure of science hurts our ability to conduct experiments at the appropriate scale, and that this should be fixed by ‘going Manhattan Project on its ass.’ Here’s what I mean (boldface added):

What I’ll propose… is that this dysfunctional system is derived from the emphasis NIH and other funding agencies place on R01 grants (or R01-‘like’ grants if NIH isn’t the funding agency). By R01 grants, I mean grants that are budgeted at no more than $500,000 per year–and usually less–and awarded to one main PI, although there may be a few other Co-PIs. To fix this, NIH (and other funding agencies) should be funding larger collaborative projects–the dreaded ‘Big Science’–and downplaying R01s. (Note: I will refer to NIH and R01s as shorthand, even though the same types of grants are available at other funding agencies–I’m trying to avoid lots of alphabet soup.) The R01 grants should function more as demonstration or proof-of-principle grants.

While there are risks to this strategy (discussed here), underfunded science often doesn’t move fields ahead. Which brings me to this excellent piece by Kevin Mitchell (boldface mine):

A few days ago there was a minor Twitterstorm over a particular paper that claimed to have found an imaging biomarker that was predictive of some aspect of outcome in adults with autism….

The reason for my cynicism is twofold: first, the study was statistically under-powered, and such studies are theoretically more likely to generate false positives. Second, and more damningly, there have been literally hundreds of similar studies published using neuroimaging measures to try and identify signatures that would distinguish between groups of people or predict the outcome of illness. For psychiatric conditions like autism or schizophrenia I don’t know of any such “findings” that have held up. We still have no diagnostic or prognostic imaging markers, or any other biomarkers for that matter, that have either yielded robust insights into underlying pathogenic mechanisms or been applicable in the clinic.

There is thus strong empirical evidence that the small sample, exploratory, no replication design is a sure-fire way of generating findings that are, essentially, noise….

This brings me back to the reaction on Twitter to the criticism of this particular paper. A number of people suggested that if neuroimaging studies were expected to have larger samples and to also include replication samples, then only very large labs would be able to afford to carry them out. What would the small labs do? How would they keep their graduate students busy and train them?

I have to say I have absolutely no sympathy for that argument at all, especially when it comes to allocating funding. We don’t have a right to be funded just so we can be busy. If a particular experiment requires a certain sample size to detect an effect size in the expected and reasonable range, then it should not be carried out without such a sample. And if it is an exploratory study, then it should have a replication sample built in from the start – it should not be left to the field to determine whether the finding is real or not.

You might say, and indeed some people did say, that even if you can’t achieve those goals, because the lab is too small or does not have enough funding, at least doing it on a small scale is better than nothing.

Well, it’s not. It’s worse than nothing.

Such studies just pollute the literature with false positives – obscuring any real signal amongst a mass of surrounding flotsam that future researchers will have to wade through. Sure, they keep people busy, they allow graduate students to be trained (badly), and they generate papers, which often get cited (compounding the pollution). But they are not part of “normal science” – they do not contribute incrementally and cumulatively to a body of knowledge.

We are no further in understanding the neural basis of a condition like autism than we were before the hundreds of small-sample/exploratory-design studies published on the topic. They have not combined to give us any new insights, they don’t build on each other, they don’t constrain each other or allow subsequent research to ask deeper questions. They just sit there as “findings”, but not as facts.
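To put some numbers on Mitchell’s point, here’s a quick simulation sketch in Python (my own illustration, not from his post; the true effect size of d = 0.3 and the group sizes are assumptions for the sake of the example). With 16 subjects per group, a two-sample t-test rarely reaches significance, and when it does, the reported effect size is badly inflated; detecting d = 0.3 with 80% power takes roughly 175 subjects per group.

```python
# Sketch: underpowered studies rarely find a modest true effect, and the
# "significant" ones that slip through overstate it (the winner's curse).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, alpha, n_sims = 0.3, 0.05, 10_000   # assumed true effect, threshold, runs

def run(n_per_group):
    hits, hit_effects = 0, []
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)      # controls
        b = rng.normal(true_d, 1.0, n_per_group)   # cases, true Cohen's d = 0.3
        _, p = stats.ttest_ind(b, a)
        if p < alpha:
            hits += 1
            # estimated effect magnitude reported by this "significant" study
            pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            hit_effects.append(abs(b.mean() - a.mean()) / pooled_sd)
    return hits / n_sims, float(np.mean(hit_effects))

for n in (16, 175):   # a typical small imaging study vs. ~80% power for d = 0.3
    power, d_hat = run(n)
    print(f"n = {n}/group: power ~ {power:.2f}, "
          f"mean estimated d among significant results ~ {d_hat:.2f}")
```

That inflation is exactly how a literature built on small samples can look full of positive “findings” while converging on nothing.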

We get the science we are willing to fund. But that’s not the only problem. There is a tension between top-down direction–which essentially means funding allocation–and investigator-driven initiatives. We don’t want to kill off investigator-driven research–not at all. But, in a zero-sum funding environment, we are allocating a lot of resources to underpowered studies that aren’t helping the field progress (even if individual researchers are progressing quite nicely in their careers).

This is neither structurally nor politically sustainable.

Happy Monday.
