I’m really getting tired of this sort of thing, and I don’t feel like scheduling it for September, so I’ll post it at 1 in the morning

Statistical Modeling, Causal Inference, and Social Science 2016-05-06

A couple days ago I received an email:

I’m a reporter for *** [newspaper], currently looking into a fun article about a recent study, and my old professor *** recommended I get in touch with you to see if you would give me a comment on the statistics in the study.

It’s a bit of fun, really, but I want to approach it with real scientific analysis, so your voice in the piece would be great.

The study is here: http://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0153419#sec010

I was eager to avoid doing any real work, so I read the paper and sent the journalist a reply:

The paper is cute but I think their conclusions go beyond their data. I have three concerns:

1. It’s not clear to what extent we should be willing to draw general conclusions from Mechanical Turk participants to voters in general.

2. The difference between “significant” and “not significant” is not itself statistically significant (see here: http://www.stat.columbia.edu/~gelman/research/published/signif4.pdf). This is relevant when considering all their comparisons between candidates. For example, they write, “the results revealed that holding favorable views of three potential Republican candidates for US president (Ted Cruz, Marco Rubio, and Donald Trump) was positively related to judging bullshit statements as profound. In contrast, non-significant relations were observed for the three Democratic candidates (Hillary Clinton, Martin O’Malley, and Bernie Sanders).” But these differences could be explained by noise. (A toy calculation after this list makes the point concrete.)

3. Multiple comparisons. The data are open-ended: they compared Democrats to Republicans, but they could’ve compared male to female candidates, and they also could’ve looked at the age and sex of respondents. So it’s no surprise that, looking through these data, they could find statistically significant comparisons (see here: http://www.stat.columbia.edu/~gelman/research/published/ForkingPaths.pdf). (A quick simulation below makes this point concrete too.)
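To make point 2 concrete, here’s a toy calculation. The numbers are invented for illustration, not taken from the paper: one estimate clears the conventional 5% threshold, the other doesn’t, yet a test of their difference clears nothing.

```python
import math

# Hypothetical estimates and standard errors, invented for illustration
# (not the paper's numbers): one "significant," one "not significant."
est_a, se_a = 0.25, 0.10   # z = 2.5 -> nominally significant
est_b, se_b = 0.10, 0.10   # z = 1.0 -> not significant

# Standard error of the difference, assuming the two estimates are independent
diff = est_a - est_b
se_diff = math.sqrt(se_a**2 + se_b**2)

print(f"z for estimate A:      {est_a / se_a:.2f}")    # 2.50
print(f"z for estimate B:      {est_b / se_b:.2f}")    # 1.00
print(f"z for the difference:  {diff / se_diff:.2f}")  # 1.06, well under 1.96
```

One comparison lands on the happy side of .05 and the other doesn’t, but the difference between them is well within what noise alone would produce.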
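And to make point 3 concrete, here’s a quick simulation sketch, again with invented respondent and comparison counts rather than the paper’s: when the data are pure noise and a dozen candidate comparisons are available, a nominally significant correlation shows up nearly half the time.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sims = 200, 12, 10_000   # respondents, comparisons, simulated datasets

hits = 0
for _ in range(sims):
    y = rng.standard_normal(n)          # outcome score: pure noise
    X = rng.standard_normal((n, k))     # k candidate ratings: pure noise
    # Sample correlation of y with each column; under the null,
    # r * sqrt(n) is approximately standard normal.
    yc = (y - y.mean()) / y.std()
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    r = Xc.T @ yc / n
    hits += (np.abs(r) * np.sqrt(n) > 1.96).any()

# With 12 independent null tests at the 5% level, 1 - 0.95**12 is about 0.46.
print(f"P(at least one 'significant' correlation): {hits / sims:.2f}")
```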

That was all. But then, just now, I received an email from Dan Kahan pointing to a long post by Asheley Landrum attacking the study on various grounds, including the points above.

I’m assuming the study was a parody (consider the paper’s title!), so I think Landrum may be going a bit over the top in her criticisms. Anyway, since this was all out there, I thought I’d share my quick take above.

For the general issue of how reporters can write about this sort of science paper, see the templates here.
