What happened that the journal Psychological Science published a paper with no identifiable strengths?
Statistical Modeling, Causal Inference, and Social Science 2013-05-20
The other day we discussed that paper on ovulation and voting (you may recall that the authors reported a scattered bunch of comparisons, significance tests, and p-values, and I recommended that they would’ve done better to simply report complete summaries of their data, so that readers could see the comparisons of interest in full context), and I was thinking a bit more about why I was so bothered that it was published in Psychological Science, which I’d thought of as a serious research journal.

My concern isn’t just that the paper is bad (after all, lots of bad papers get published) but rather that it had nothing really going for it, except that it was headline bait. It was a survey done on Mechanical Turk, that’s it. No clever design, no clever questions, no care in dealing with nonresponse problems, no innovative data analysis, no nothing. The paper had nothing to offer, except that it had no obvious flaws. Psychology is a huge field full of brilliant researchers. Its top journal can choose among so many papers. To pick this one, a paper that had nothing to offer, seems to me like a sign of a serious problem.
A good study does not need to be methodologically original, but it should be methodologically sound. When we do surveys we worry about nonresponse. Taking a few hundred people off MTurk without even looking for possible nonresponse bias is not serious.
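(To make the nonresponse point concrete, here is a minimal sketch, not from the paper or from this post, of the kind of representativeness check one might at least run: compare the sample’s demographic breakdown against known population benchmarks. All group labels and numbers below are hypothetical placeholders.)

```python
# Hypothetical sketch of a basic nonresponse/representativeness check:
# compare a convenience sample (e.g., MTurk respondents) to population
# benchmarks (e.g., census shares of the electorate). Numbers are made up.

# Hypothetical sample counts by age group from an online survey.
sample_counts = {"18-29": 210, "30-44": 140, "45-64": 40, "65+": 10}

# Hypothetical population benchmark shares for the same groups.
population_shares = {"18-29": 0.21, "30-44": 0.25, "45-64": 0.35, "65+": 0.19}

n = sum(sample_counts.values())
print(f"{'group':>6} {'sample':>8} {'benchmark':>10} {'diff':>7}")
for group, count in sample_counts.items():
    sample_share = count / n
    diff = sample_share - population_shares[group]
    print(f"{group:>6} {sample_share:8.2f} {population_shares[group]:10.2f} {diff:+7.2f}")

# Large gaps (here: far too many young respondents, far too few older ones)
# signal that unweighted estimates may be biased. At a minimum one would
# report this comparison; better still, poststratify or reweight to the
# benchmarks before drawing conclusions.
```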
But, again, it’s not so much that this paper was flawed, as that it had nothing much positive to offer.
Just to be clear: I’m really really really really not trying to censor such work, and I’m really really really really not saying this work should not be published. What I’m saying is that the top journal in a field should not be publishing such routine work. They should be publishing the best quality research, not just random things that happen to slip through the cracks.
I mean, sure, the referees should’ve caught the problems with that paper. But I blame the editors for even considering publication. Even setting aside the methodological flaws, the paper was nothing special.
And, once you decide to start publishing mediocre papers in your top journal, you’re asking for trouble. You’re encouraging more of the same.
To clarify, let me compare this with some other high-profile examples:
- Christakis and Fowler’s study of the contagion of obesity. This was published in top journals and was later found to have some serious methodological issues. But it’s not a mediocre work. They had a unique dataset, a new idea, and some new methods of data analysis. OK, they made some mistakes, but I can’t fault a leading journal for publishing this work. It has a lot of special strengths.
- Bem’s paper claiming to demonstrate ESP. OK, I wouldn’t have published this one. But I can see where the journal was coming from on this. If the results had held up, it would’ve been the scientific story of the decade, and the journal didn’t want to miss out. The editors didn’t show the best judgment here, but their decision was understandable.
- Kanazawa’s papers on schoolyard evolutionary biology. I’ve written about the mistakes here, and this work has a lot of similarities to the ovulation-and-voting study. The difference is that Kanazawa’s papers were published in a middling place, the Journal of Theoretical Biology, not in a top journal of his field. Don’t get me wrong, JTB is respectable, but it’s in the middle of the pack. Nobody expects it to publish the best of the best.
- Hamilton’s paper in the American Sociological Review, claiming that college students get worse grades if their parents pay. This paper had a gaping hole (not adjusting for the selection effect arising from less well-funded students dropping out) and I think it was a mistake for it to be published as is—but that’s just something the reviewers didn’t catch. On the plus side, Hamilton’s paper was thoughtful and had some in-depth analysis of large datasets. It had mistakes, but it had strengths too. It was not a turn-the-crank, run-an-online-survey-and-spit-out-p-values job.
My point is that, in all these cases of the publication of flawed work (and one could add the work of Mark Hauser and Bruno Frey as well), the published papers either had clear strengths or else were not published in top journals. When an interesting, exciting, but flawed paper (such as Bem’s or Hauser’s) is published in a top journal, that’s too bad, but it’s understandable. When a possibly interesting paper (such as Kanazawa’s) is published in an OK journal, that makes sense too. It would not make sense to demand perfection. But when a mediocre paper (which also happens to have serious methodological flaws) is published in a top journal, something is seriously wrong. There are lots of things that can make a research paper special, and this paper had none of them (unless anything combining voting and sex in an election year counts as special).
P.S. Let me emphasize that my goal here is not to pile on and slam the ovulation-and-voting paper. In fact, I’ve refrained from linking to the paper here, just to give the authors a break. They did a little study that happened to be flawed. That’s no big deal. I’ve done lots of little studies that happened to be flawed, and sometimes my flawed work gets published. I’m not criticizing the authors for making some mistakes. I hope they can do better next time. I’m criticizing the journal for publishing a mediocre paper with little to offer. That’s not just a retrospective mistake; it seems like a problem with their policies that such an unremarkable paper could even be seriously considered for publication in the top journal of their field.