7 steps to junk science that can achieve worldly success

Statistical Modeling, Causal Inference, and Social Science 2025-01-17

More than a decade after the earthquake that was the replication crisis (for some background, see my article with Simine Vazire, Why did it take so many decades for the behavioral sciences to develop a sense of crisis around methodology and replication?), it is frustrating to see junk science still being published, promoted, and celebrated, even within psychology, the field that was at the epicenter of the crisis.

The crisis continues

An example that I learned about recently was an article out of Harvard, Physical healing as a function of perceived time, published in 2023 and subsequently promoted in the news media, that claimed to demonstrate that healing of bruises could be sped or slowed by manipulating people’s subjective sense of time. All things are possible, and never say never, but, yeah, this paper offered no good evidence for its extraordinary claims. It was standard-issue junk science: a grabby idea, a statistically significant p-value extracted from noisy data, and big claims.

Someone pointed me to this paper, and for some reason that I can no longer remember, Nick Brown and I decided to figure out exactly what went wrong with it. We published our findings in this article, How statistical challenges and misreadings of the literature combine to produce unreplicable science: An example from psychology, which will appear in the journal Advances in Methods and Practices in Psychological Science.

In short, the published article was flawed in two important ways, first in its statistical analysis (see section 2.4 of our paper, where we write, “We are skeptical that this study reveals anything about the effect of perceived time on physical healing, for four reasons”) and second in its interpretation of its cited literature (see section 3 of our paper, where we write, “Here we discuss three different examples of this sort of misinterpretation of the literature cited in the paper under discussion”).

I don’t have any particular interest in purported mind-body healing, but Nick and I went to the trouble to shepherd our article through the publication process, with two goals in mind:
– Providing an example of how we, as outsiders, could look carefully at a research article and its references and figure out what went wrong. This is important, because it’s pretty common to see papers that make outlandish claims but seem to be supported by data and the literature.
– Exploring what exactly goes wrong: in this case, it was a mis-analysis of a complex data structure, researcher degrees of freedom in decisions of what to report, and multiple inaccurate summaries of the literature.

What does it take for junk science to be successful?

All this got me thinking about what it takes for researchers to put together a successful work of junk science in the modern era, which is the subject of today’s post.

Before going on, let me emphasize that I have no reason to suspect misconduct on the part of the authors of the paper in question. It’s a bad paper, and it’s bad science, but that happens given how people are trained, and given the track record of what gets published in leading journals (Psychological Science, PNAS), what gets rewarded in academia, and what gets publicity from NPR, TED, Freakonomics, and the like. As we’ve discussed many times, you can do bad science without being a bad person and without committing what would usually be called research misconduct. (I actually don’t think that bad data analysis and inaccurate description of the literature would usually be put in the “research misconduct” category.)

This is also why I’m not mentioning the authors’ names here. The names are no secret–just click on the above link and the paper is right there!–I’m just not including them in this post, so as to emphasize that I’m writing here about the process of bad science and its promotion; it’s not about these particular authors (or any particular authors).

7 steps to junk science

So here they are, 7 things that allow junk science to thrive:

1. Bad statistical analysis. Statistics is hard; there are a lot of ways to make mistakes, and often these mistakes can lead to what appears to be strong evidence.

2. Researcher degrees of freedom. Garden of forking paths. As always, the problem is not with the forking paths–there really are a lot of ways to collect, code, and analyze data!–but rather with selection in what is reported. As Simmons et al. (2011) unforgettably put it, “undisclosed flexibility in data collection and analysis allows presenting anything as significant.” And, as Loken and I emphasized in our paper on forking paths, “undisclosed flexibility” could be undisclosed to the authors themselves: the problem is with data-dependent analysis choices, even if the data at hand were analyzed only once. (For a toy demonstration of how points 1 and 2 combine, see the simulation sketch after this list.)

3. Weak or open-ended substantive theory. Theories such as evolutionary psychology, embodied cognition, and mind-body healing are vague enough to explain just about anything. As Brown and I wrote in our above-linked article, “The authors refer to ‘mind–body unity’ and ‘the importance of psychological factors in all aspects of health and wellbeing,’ and we would not want to rule out the possibility of such an effect, but no mechanisms are examined in this study, so the result seems at best speculative, even taking the data summaries at face value. During the half hour of the experimental conditions, the participants were performing various activities on the computer that could affect blood flow, and these activities were different in each condition . . . there are many alternative explanations for the results which we find just as scientifically plausible as the published claim.”

4. Inaccurate summaries of the literature. This is a big deal, a huge deal, and something we don’t talk enough about.

It’s a lot to expect journal editors and reviewers to check citations and literature reviews. It’s your job as an author to read and understand the work you’re citing, and not to use those papers to make claims they don’t support. For example, don’t make the claim, “If a person who does not exercise weighed themselves, checked their blood pressure, took careful body measurements, wrote everything down, maintained their same diet and level of physical activity, and then repeated the same measures a month later, few would expect exercise-like improvements. But in a study involving hotel housekeepers, that is effectively what the researchers found,” if the study you’re citing does not actually support it.

5. Institutional support. Respectable journals are willing to publish articles that make outlandish claims based on weak evidence. Respected universities give Ph.D.’s for such work. Again, I’m not suggesting malfeasance on the part of the authors; they’re just playing by the rules that they’ve learned.

6. External promotion. This work was featured in Freakonomics, Scientific American, and other podcasts and news outlets (see here and here). This external promotion has three malign effects:
– Most directly, it spreads the (inaccurate) word about the bad research.
– The publicity also provides an incentive for people to do more sloppy work that can yield these sorts of strong claims from weak evidence.
– Also, publicity for sloppy, bad science can crowd out publicity for careful, good science and reduce the incentives to do it.

7. Celebrity culture. This is a combination of items 5 and 6 above: many celebrity academic and media figures prop each other up. Some of it’s from converging interests, as when the Nudgelords presented the work of Brian Wansink as “masterpieces,” but often I think it’s more just a sense that all these media-friendly scientists and podcasters and journalists feel that they’re part of some collective project of science promotion, and from that perspective it doesn’t really matter if the science is good or bad, as long as it’s science-like, by their standards.
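
To make points 1 and 2 above concrete, here is a minimal simulation sketch in Python. It is not a reanalysis of the paper under discussion: the sample sizes, the repeated-ratings structure, and the four candidate analyses are all hypothetical. The point is only that when repeated measurements are treated as independent, and the analyst gets to pick among a few defensible-looking comparisons after seeing the data, “p < 0.05” shows up far more often than 5% of the time, even when there is no effect at all.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_group(n_people, n_ratings):
    """Null data: a participant-level random effect plus rating-level noise,
    so ratings from the same person are correlated but there is no true
    difference between conditions."""
    person = rng.normal(0.0, 1.0, size=(n_people, 1))
    noise = rng.normal(0.0, 1.0, size=(n_people, n_ratings))
    return person + noise

n_sims, n_people, n_ratings = 2000, 20, 5  # hypothetical settings
hits = 0
for _ in range(n_sims):
    a = simulate_group(n_people, n_ratings)
    b = simulate_group(n_people, n_ratings)
    # Forking paths: several defensible-looking analyses of the same null data.
    p_values = [
        stats.ttest_ind(a.ravel(), b.ravel()).pvalue,            # treat all ratings as independent (ignores the clustering)
        stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue,  # compare per-person means
        stats.ttest_ind(a[:, -1], b[:, -1]).pvalue,              # compare only the final rating
        stats.ttest_ind(a.max(axis=1), b.max(axis=1)).pvalue,    # compare per-person maxima
    ]
    # Selection: report whichever analysis "worked."
    if min(p_values) < 0.05:
        hits += 1

print(f"Share of null datasets with at least one p < 0.05: {hits / n_sims:.2f}")
```

None of this requires bad intentions: each of the four analyses looks reasonable on its own, and an analyst who runs only one of them, chosen after looking at the data, is in essentially the same position as one who runs all four and reports the best.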

Anyway, this continues to bug the hell out of me, which is why I keep chewing on it and writing about it from different angles. I’m glad that Nick and I wrote that paper–it took some effort to track down all the details and express ourselves both clearly and carefully.