Over the last couple of weeks, Chris Mooney has written several interesting posts and articles about how human cognition affects the incorporation of evidence, especially scientific evidence (e.g., on global warming), and what that means for politics. At the back of my mind have been nagging doubts about the assumptions Mooney has been making, doubts crystallized by a post by Timothy Burke. I’m not sure that voters are as irrational as Mooney makes them out to be–overall, I think they’re far more ‘low information’ and focused on a few reasonable, if imperfect and sometimes exploitable, ‘rules of thumb’ that then guide how other, secondary issues are interpreted (and, unfortunately, global warming is one of those issues).
Regarding the distrust of scientific evidence–which would be more accurately described as distrust of policy and political interpretations of scientific publications–there are actually some legitimate grounds for that distrust. First, scientists have done some extraordinarily sleazy things (e.g., the Tuskegee experiments), not to mention declaring personal opinions, such as eugenics, to be scientifically based. And the popular movie depiction of scientists as either amoral or immoral hasn’t helped either.
Burke notes that politics also has sullied the legitimacy of the scientific process (italics mine):
…the interests of political elites and institutional actors within modern states are demonstrably not identical in all or even most instances to the public good, and have a history in their own right of delivering policies which subsequently prove to have unintended, uneven, self-interested or destructive effects. When scientific knowledge gets caught up in that process, it becomes by definition less trustworthy or more worthy of skepticism than research which is not strongly directed towards justifying political or bureaucratic decisions. Add to this the intrusion of businesses and other private institutions with a strong interest in the production (or suppression) of particular kinds of scientific knowledge in relationship to the making of public policy. A historical perspective quickly demonstrates that many claims imbued with the authority of science, deployed in service to policy, have had powerful consequences but a very weak relationship to scientific truths.
Indeed. And the third point has to do with the intersection of the Decline Effect with propaganda and hype (often not by the researchers themselves). Many studies–think of all of the health-oriented observational studies–are after-the-fact (post hoc) tests of hypotheses, and they are often underpowered (e.g., not enough patients) to answer the questions being asked of them. This lack of power means that results which are statistically significant, and therefore look ‘convincing’, are often spurious in a biological sense:
Gelman (and he has some good slides over at his post) is claiming, correctly, that if the effect is weak and you don’t have enough samples (e.g., subjects enrolled in the study), any statistically significant result will be so much greater than what the biology would provide that it’s probably spurious. You might get lucky and have a spurious result that points in the same direction as the real phenomenon, but that’s just luck….
But the problem with real-life experimentation is that we often don’t have any idea what the outcome should look like. Do I have the Bestest Cancer Drug EVAH!, or simply one that has a small, but beneficial, effect? If you throw in a desire, not always careerist or greedy (cancer does suck), to find or overestimate a large effect, the healthy habit of skepticism is sometimes honored in the breach. Worse, if you’re ‘data-mining’, you often have no a priori hypotheses at all!
Note that this is not a multiple-comparisons or ‘p-value’ issue–the point isn’t that sometimes you’ll get a significant result by chance. The problem has to do with detection: with inadequate study sizes plus weak effects, almost anything you detect is spurious, in the sense that its size is wildly exaggerated, albeit sometimes fortuitously pointed in the right direction.
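To make the arithmetic concrete, here’s a minimal simulation sketch of that significance filter (my illustration, not Gelman’s; the true effect size, per-arm sample size, and number of simulations are assumptions chosen to mimic a weak effect studied with a small sample):

# Sketch of the 'significance filter': with a weak true effect and a small
# study, the estimates that happen to reach p < 0.05 greatly overstate the
# truth, and some even point the wrong way. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.1   # weak true effect, in standard-deviation units (assumed)
n = 20              # small study: 20 subjects per arm (assumed)
n_sims = 10_000     # number of simulated studies

significant_estimates = []
for _ in range(n_sims):
    treated = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        # record the estimated effect only when the study "detects" something
        significant_estimates.append(treated.mean() - control.mean())

sig = np.array(significant_estimates)
print(f"power (share of sims reaching p<0.05): {len(sig) / n_sims:.2%}")
print(f"mean estimate among significant results: {sig.mean():.2f}")
print(f"true effect: {true_effect}")
print(f"significant results with the wrong sign: {(sig < 0).mean():.2%}")

With numbers like these, only a few percent of the simulated studies reach significance at all, and the ones that do report an effect several times larger than the truth, with a nontrivial share pointing in the wrong direction. That’s the sense in which a significant result from an underpowered study is ‘spurious’ even when the underlying effect is real.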
I really don’t think we should underestimate just how much the constant barrage of bizarre claims, later overturned by follow-up studies, damages the legitimacy of the scientific process. It’s incredibly damaging.
This isn’t to say that scientific results aren’t filtered through political leanings or circumstance (e.g., people might be more receptive to global warming concerns if unemployment were lower). And never underestimate the willingness of large swathes of the American public to rally around the notion of punching Dirty Hippies in the Face.
But criticism of science that is marshaled in support of policy isn’t necessarily irrational or a cognitive slip-up, just as economic behavior often isn’t irrational in a cognitive sense.
By the way, telling people they’re being irrational will piss them off. Just saying.