The distinction between research that could be valid but just happens to be wrong, and junk science that never had a chance (an issue raised by a recent failed-science story)
R-bloggers 2022-12-24
Greg Meyer writes:
An item perhaps of interest in regard to post-publication review, in this case in math/logic. A paper claiming to prove a theorem, posted online on 25 October in Studia Logica, had its proof refuted on 28 October in an online discussion at MathOverflow by David Roberts. The editor “retracted” the paper on 30 October (along with an earlier paper by the same author; I’m not sure what the issue was with the earlier paper). (I’ve put “retracted” in quotes because it’s a print journal, and the refuted proof did not, and will not, appear in print, so it’s a retraction of a not-yet-fully-published paper.)
One of the commenters at MathOverflow, Alec Rhea, wrote, “MathOverflow seems to be taking on a role as the final stage of review.” I thought this might be an interesting case study in extended peer review.
This is indeed interesting to me. Not the math part—I couldn’t care less about the twin prime conjecture—and not even the retraction part, but something else.
There are two kinds of retractions. In the first kind, some research is done that could’ve worked, but it turns out that something was done wrong, and it didn’t really work out. In the second kind, the research never even had a chance of being correct, and the closer you look, the more you realize that the claims were bogus: not just wrong, but lacking whatever it might take to possibly work.
This doesn’t have to do with fraud. It can be simple incompetence. Or, to put it more charitably, lack of understanding of sophisticated ideas. An example is that beauty-and-sex-ratio research we discussed many years ago. Big claims, published in a legit biology journal and hyped by legit media outlets, but it never had a chance to work. The researcher who did this was, I assume, naively under the impression that statistical significance implied a good signal-to-noise ratio. He was completely wrong, just as the people making claims about elections and longevity were completely wrong—not necessarily wrong on the directions of their substantive claims (who knows?) but wrong in their belief that they’d discovered or proved or found good evidence for their claims from their data.
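To see why statistical significance is no guarantee of a good signal-to-noise ratio, here is a minimal simulation sketch in Python. The numbers are made up for illustration, in the spirit of that example rather than taken from the actual study: a tiny true effect measured with a large standard error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative numbers (not the actual study's): a tiny true effect
# measured with a much larger standard error.
true_effect = 0.3   # true difference, in percentage points
se = 3.0            # standard error of the estimated difference
n_sims = 100_000

# Simulate many replications of the same noisy study.
estimates = rng.normal(true_effect, se, n_sims)

# Results that would be called "statistically significant" at the 5% level.
significant = np.abs(estimates) > 1.96 * se

print(f"Share of studies reaching significance:        {significant.mean():.3f}")
print(f"Mean |estimate| among significant results:     {np.abs(estimates[significant]).mean():.1f}")
print(f"Share of significant results with wrong sign:  {(estimates[significant] < 0).mean():.2f}")
```

With numbers like these, the rare estimate that clears the significance threshold is necessarily many times larger than the true effect, and a nontrivial fraction point in the wrong direction. That is the sense in which such a study never had a chance: significance filters for noise, not for signal.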
One characteristic of these never-had-a-chance research projects is that they can seem reasonable to casual readers, while experts can easily see that the work is hopeless.
And that brings us to this math problem. As Meyer wrote, someone claimed to prove a theorem and it turns out that he didn’t. In this case, the accurate framing of the story is not, “It looked like the Twin Prime Conjecture might have been proven but it turned out the proof was flawed.” Rather, it’s “Someone who never had a chance of proving the Twin Prime Conjecture deluded himself and some reviewers into thinking he would be able to do it.” If you follow the thread, you’ll see that the author of that paper never had a chance, any more than I’d have a chance to row a boat across the Atlantic.
I was reminded of this example from a couple of years ago of someone claiming to prove that sqrt(2) is a normal number. Again, dude never had a chance.
This is different, for example, from that disgraced primatologist. He misrepresented his data, but he was qualified, right? He could possibly have learned something important about monkeys in his experiments; he just didn’t. It just struck me that, when writing about replication failures or science failures, we should distinguish between these two scenarios. The distinction is not sharp, but I think the general point is relevant.