Turning Off Comments? Effect Size and Why You Should Always Read the Primary Literature

Last week, Popular Science's decision to turn off comments got a lot of attention in the bloggysphere. What's interesting is that this decision appears to be based on a misreading of the primary literature (boldface mine):

The quotes used in PopSci's post are from an Op-Ed piece written by two of the four authors of a study called "The 'Nasty Effect:' Online Incivility and Risk Perceptions of Emerging Technologies," forthcoming from the Journal of Computer-Mediated Communication. The authors had 1183 adults read a blog post about risks and benefits related to nanotechnology. Some read a version that had uncivil comments (including, for example, personal attacks and name calling) and some read a version that had only civil comments in the comments section. They measured several characteristics of the participants in relation to the topic, such as their familiarity with nanotechnology, their confidence in their knowledge, and their prior support for the technologies. They also measured other characteristics such as readers' usual reading behaviours, their religiousness, their age and their gender. The researchers used all of these variables (including which version of the comments the participants saw) to figure out which ones would explain how the readers would rate the risks of nanotechnology after reading the blog post and comments…

The first glaring issue is that even all of the variables put together (from age to prior beliefs, up to and including the civility of the comments) seem to have a small effect on the readers. All of these things put together only explain 17% of the differences in readers' responses. So, 83% of what influenced the way readers responded to the article had nothing to do with any of the things the researchers measured, including the civility of the comments. Following directly from that, it would be tough to tell from the Op-Ed that the civility of the comments had NO SIGNIFICANT DIRECT EFFECT on readers' perceptions of nanotechnology. Here it is straight from the paper: "Our findings did not demonstrate a significant direct relationship between exposure to incivility and risk perceptions. Thus, our first hypothesis was not supported."
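To make that 17% figure concrete: in a regression, R² is the fraction of variance in the outcome that all of the predictors jointly explain, and 1 − R² is everything the model cannot account for. Here is a minimal sketch with simulated data (the predictors, coefficients, and noise level below are invented for illustration, not taken from the study) that lands in the same neighborhood:

```python
# Illustration only: simulated data showing what "the model explains
# 17% of the variance" means. Nothing here comes from the actual study.
import numpy as np

rng = np.random.default_rng(42)
n = 1183  # matches the study's sample size, purely for flavor

# Hypothetical predictors (all invented): prior support, religiousness,
# and whether the reader saw the uncivil-comments version of the post.
X = np.column_stack([
    rng.normal(size=n),          # prior support (standardized)
    rng.normal(size=n),          # religiousness (standardized)
    rng.integers(0, 2, size=n),  # uncivil comments? 0/1
    np.ones(n),                  # intercept
])

# Build an outcome where the predictors carry only a modest signal,
# so the fitted model ends up near R^2 ~ 0.17.
signal = 0.35 * X[:, 0] + 0.25 * X[:, 1] + 0.10 * X[:, 2]
y = signal + rng.normal(scale=1.0, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
r2 = 1 - residuals.var() / y.var()

print(f"R^2 explained: {r2:.2f}")      # about 0.16 with this setup
print(f"unexplained:   {1 - r2:.0%}")  # everything the model misses
```

Even with every measured variable in the model, the bulk of the variation in the outcome is, as far as the model is concerned, noise; that is exactly the 83% the quote is pointing at.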

So then the authors looked at the interaction effects, i.e., whether a variable's influence differed within particular subsets of the population (boldface mine):

They found two very small interaction effects. First, when they looked just at the group who read the uncivil comments, those who already supported nanotechnology expressed even lower risk than they did in the civil comment group and those who already didn't support it expressed even higher risks. So among those who already held strong views, the uncivil comments tended to polarize them a bit further. They found a similar relationship around religiousness, although I think it's harder to explain this one. The authors seem to have come in with the assumption that religious people would generally perceive higher risk than those who are less religious. In the overall sample this didn't come through though. Risk assessments were evenly spread among people with all religiousness scores. The only difference was that when the comments were uncivil the religiousness factor then (and only then) acted in the way they expected. So highly religious people reading uncivil comments expressed higher risk and vice versa. Both effects were very small though, increasing or decreasing risk perceptions by 1-2%.*

To be blunt, this is a big nothingburger: for a subset of people, risk perceptions shifted by a tiny amount, an amount much smaller than the unaccounted-for variation in risk perception (83%). Barely perceptible, if you will.
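For readers who want to see the mechanics, here is what an interaction effect of this kind looks like in a regression. The data, variable names, and coefficient values below are invented for illustration (this is not the authors' model or data); the point is just that the incivility term itself can be null while the support-by-incivility product term is nonzero yet small:

```python
# Illustrative sketch of a small interaction effect with simulated data
# (not the study's): incivility has no direct effect on risk perception,
# but it slightly amplifies pre-existing attitudes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1183

support = rng.normal(size=n)          # prior support, standardized
uncivil = rng.integers(0, 2, size=n)  # 0 = civil version, 1 = uncivil

# No main effect of incivility (its coefficient is 0); a small negative
# interaction pushes supporters and opponents slightly further apart.
risk = (-0.4 * support
        + 0.0 * uncivil
        - 0.15 * support * uncivil
        + rng.normal(scale=1.0, size=n))

df = pd.DataFrame({"risk": risk, "support": support, "uncivil": uncivil})
fit = smf.ols("risk ~ support * uncivil", data=df).fit()

print(fit.params)   # 'uncivil' is near 0; 'support:uncivil' near -0.15
print(fit.pvalues)  # the interaction can be "significant" and still tiny
```

A polarizing interaction like this shows up as a significant product term even when the treatment's own coefficient is indistinguishable from zero, which matches the pattern the quote describes.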

Frankly, I don't care what Popular Science does with its comments section, although this episode does call into question the validity of its other science reporting. But obsession over negligible effects has done real damage; the first edition of The Bell Curve is an especially disturbing case.

It's not enough to find a significant p-value. The size of the effect behind that p-value has to be considered as well, especially once one starts making policy decisions based on those results.
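A quick simulated demonstration of that last point (made-up numbers, not from any study): with a large enough sample, an effect of one percent of a standard deviation comes back statistically significant even though it is trivially small:

```python
# Significance vs. effect size: with enough data, a negligible effect
# still produces a tiny p-value. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500_000  # per group

a = rng.normal(loc=0.00, scale=1.0, size=n)
b = rng.normal(loc=0.01, scale=1.0, size=n)  # true gap: 1% of an SD

res = stats.ttest_ind(a, b)
# Cohen's d: mean difference in units of the pooled standard deviation
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

print(f"p-value   = {res.pvalue:.2g}")  # comfortably below 0.05
print(f"Cohen's d = {d:.3f}")           # ~0.01: far too small to matter
```

The p-value answers "is the effect exactly zero?", not "is the effect big enough to act on?", which is why it cannot carry a policy decision by itself.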
