She sent a letter pointing out problems with a published article; the reviewers agreed that her comments were valid, but the journal didn’t publish her letter because “the policy among editors is not to accept comments.”
Statistical Modeling, Causal Inference, and Social Science 2021-07-08
The journal in question is called The Economic Journal. To add insult to injury, the editor wrote the following when announcing they wouldn’t publish the letter:
My [the editor’s] assessment is that this paper is a better fit for a field journal in education.
OK, let me get this straight. The original paper, which was seriously flawed, was ok for Mister Big Shot Journal. But a letter pointing out those flaws . . . that’s only good enough for a Little Baby Field Journal.
That doesn’t make sense to me. I mean, sure, when it comes to the motivations of the people involved, it makes perfect sense: their job as journal editors is to give out gold stars to academics who write the sorts of big-impact papers they want to publish; to publish critical letters would devalue these stars. But from a scientific standpoint, it doesn’t make sense. If the statement, “Claim X is supported by evidence Y,” was considered publishable in a general-interest journal, then the statement “Claim X is not supported by evidence Y” should also be publishable.
It’s tricky, though. It only works if the initial, flawed, claim was actually published. Consider this example:
Scenario A:
– A photographer disseminates a blurry picture and says, “Hey—evidence of Bigfoot!”
– The Journal of the Royal Society of Zoology publishes the picture under the title, “Evidence of Bigfoot.”
– An investigator shows that this could well just be a blurry photograph of some dude in a Chewbacca suit.
– The investigator submits her report to the Journal of the Royal Society of Zoology.
Scenario B:
– A photographer disseminates a blurry picture and says, “Hey—evidence of Bigfoot!”
– An investigator shows that this could well just be a blurry photograph of some dude in a Chewbacca suit.
– The investigator submits her report to the Journal of the Royal Society of Zoology.
What should JRSZ do? The answer seems pretty clear to me. In scenario A, JRSZ should publish the investigator’s report. In scenario B, they shouldn’t bother.
Similarly, the Journal of Personality and Social Psychology decided in its finite wisdom to publish that silly ESP article in 2011. As far as I’m concerned, this puts them on the hook to publish a few dozen articles showing no evidence for ESP. They made their choice, now they should live with it.
Background
Here’s the story, sent to me by an economist who would like anonymity:
I thought that you may find the following story quite revealing and perhaps you may want to talk about it in your blog.
In a nutshell, a PhD student, Claudia Troccoli, replicated a paper published in a top Economics journal and she found a major statistical mistake that invalidates the main results. She wrote a comment and sent it to the journal. Six months later she heard from the journal. She received two very positive referee reports supporting her critique, but the editor decided to reject the comment because he had just learned that the journal has an (unwritten) policy of not accepting comments. Another depressing element of the story is that the original paper was a classical example where a combination of lack of statistical power and multiple testing leads to implausible large effects (probably one order of magnitude of what one would have expected based on the literature). It is quite worrying that some editors in top economic journals are still unable to detect the pattern.
The student explained yesterday this story in twitter here and she has posted the comment, the editor letter, and referee reports here.
This story reminds me of my experience with the American Sociological Review a few years ago. They did not want to publish a letter of mine pointing out flaws in a paper they’d published, and their reason was that my letter was not important enough. I don’t buy that reasoning. Assuming the originally published paper was itself important (if not, the journal wouldn’t have published it), I’d say that pointing out the lack of empirical support for a claim in that paper was also important. Not as important as the original paper, which made many points that were not invalidated by my criticism—but, then again, my letter was much shorter than that paper! I think it had about the same amount of importance per page.
Beyond this, I think journals have the obligation to correct errors in the papers they’ve published, once those errors have been pointed out to them. Unfortunately, most journals seem to have a pretty strong policy not to do that.
As Trocolli wrote of the Economics Journal:
The behavior of the journal reflects an incentive problem. No journal likes to admit mistakes. However, as a profession, it is crucial that we have mechanisms to correct errors in published papers and encourage replication.
I agree. And “the profession” is all of science, not just economics.
Not too late for a royal intervention?
I googled *Economic Journal* and found this page, which says that it’s “the Royal Economic Society’s flagship title.” Kind of horrible of the Royal Economic Society to not correct its errors, no?
Perhaps the queen or Meghan Markle or someone like that could step in and fix this mess. Maybe Prince Andrew, as he’s somewhat of a scientific expert—didn’t he write something for the Edge Foundation once? I mean, what’s the point of having a royal society if you can’t get some royal input when needed? It’s a constitutional monarchy, right?
P.S. Werner sends in this picture of a cat that came up to him on a park bench at the lake of Konstanz in Germany and who doesn’t act like a gatekeeper at all.