Before I got into genomics, I spent some time in science and health policy. On a couple of occasions, I was invited to participate in a round table/white paper thingee where we were supposed to offer suggestions to NIH and other funding agencies. We would make recommendations, program officers would agree with those recommendations, and then reviewers would… fund the same old shit.
That’s why I’ve advocated more specific RFAs that allow NIH to set targeted priorities:
My experience has been that with very targeted calls for proposals, there are far fewer proposals submitted, and it’s much easier to flat-out reject them because many are not germane to the funding objectives. This means that NIH program officers have to be far more active in defining specific research objectives than they have been–to a considerable extent, NIH is placing this responsibility on reviewers who often lack knowledge of the larger institutional objectives. That needs to change.
That’s why I was interested in NIH’s new scoring system that would set up scores in such a way that program officers (and the council) would have an opportunity to use their discretion:
With less information–that is, fewer opportunities to discriminate based on pretty meaningless differences in scientific merit and capabilities–I think there will be more grants that are very tightly bunched together, meaning that either program officers and the review council will select grants based on statistically ridiculous differences (although one could argue that’s already happening), or else funding decisions will shift somewhat to NIH officials.
I’m not entirely sure that’s a bad thing–if there’s a downside to the R01 mechanism as currently construed, there’s little accountability for panels that choose grantees stupidly.
Well, ScienceBlogling DrugMonkey crunches the numbers and concludes that’s exactly what’s happening:
I have to say I’m in favor of this approach and the outcome. I feel that in the past POs were all too willing to act as if they believed, and likely did actually believe, that “small and likely meaningless mathematical differences” in score were producing bedrock quality distinctions. I felt that this allowed them to take the easy way out when it came to sorting through applications for funding. Easy to ignore bunny hopper bias which resulted in 10X of the same-ol, same-ol projects being funded. Easy to ignore career-stage bias. Easy to think that if HawtNewStuff really was all that great, of course it would get good scores. Etc.
I like that the POs are going to have to look at the tied applications and really think about them.
I agree completely. In her book ECONned: How Unenlightened Self Interest Undermined Democracy and Corrupted Capitalism (review coming soon!), Yves Smith makes the point that economics too often is what economists want it to be, not what the rest of us need it to be. Unlike many economists, however, we are funded by the public, and we need to be accountable to the public. Removing some decision making from study sections and returning it to the NIH is a very good step.
If you’re worried that this will ‘politicize’ science, well then, you might just have to sully yourself with political advocacy. You know, like citizens do.
And now, extra bonus video (which is tangentially appropriate):