“Does anyone actually expect meaningful insight to come from a study like this?”

Statistical Modeling, Causal Inference, and Social Science 2025-01-27

“This is one of those views which are so absurd that only very learned men could possibly adopt them.” — Bertrand Russell.

In a comment on a recent post on the many failings of a published paper that purported to demonstrate a form of mind-body healing, Raghu Parthasarathy wrote:

Does anyone actually expect meaningful insight into the important topic of wound healing to come from a study like this?

The quick answer is: Yes, Raghu, many people appear to actually expect meaningful insight into the important topic of wound healing to come from a study like this! These people include:

– The editor of the journal, Scientific Reports (“an open access journal publishing original research from across all areas of the natural sciences, psychology, medicine and engineering. . . . the 5th most-cited journal in the world,” published by the respected Nature publishing group).

– The associate editor and reviewers of the paper, whoever they are.

– The psychology department at Harvard University.

– The two authors of the paper, both of whom are at Harvard.

– Harvard University itself, which publicized this work on its promotional magazine, the Harvard Gazette.

– Also Harvard when it released that statement a few years ago from a psychology professor and a political science professor that “the replication rate in psychology is quite high–indeed, it is statistically indistinguishable from 100%.”

– Celebrity economist Steven Levitt, who hyped this work on his Freakonomics podcast.

– Quasi-celebrity physicist Sean Carroll, who hyped it on his Mindscape podcast. Amusingly, the url for that podcast is preposterousuniverse.com. “Preposterous Universe,” indeed.

– The formerly-respected magazine Scientific American, which published a lay summary by the authors.

– The who-knows-how-many thousands of people who trust the Nature brand, Harvard, Levitt, Carroll, or Scientific American–along with the millions of people who’ve been “primed” (sorry!) by NPR, PNAS, Harvard, etc., over the past couple of decades to believe this sort of exercise in the creation and mining of noise.

So, Raghu, the answer to your question is, Yeah, it seems that a lot of people do actually expect meaningful insight into the important topic of wound healing to come from a study like this.

To actually expect meaningful insight into the important topic of wound healing to come from a study like this, all you have to do is take the word of Steven Levitt, Sean Carroll, the Nature publication group, Harvard University, and Scientific American.

These are the same sorts of sources that, just a few years ago, were extolling the “masterpieces” of the now-discredited food behavior researcher Brian Wansink.

Of all these people, the one I’m most annoyed by is . . .

Among all the people mentioned above who actually expect meaningful insight into the important topic of wound healing to come from a study like this, the one I’m most annoyed by is Sean Carroll.

Don’t get me wrong–it’s not personal. I’ve never met the man, nor have I ever corresponded with him. It’s just . . . he’s a physicist. A physicist should know better! I’m not saying that physicists can never do anything dumb–all of us have the capability for cluelessness–I’m just surprised and disappointed to see a physicist being dumb in this particular credulous way.

Why am I not so disappointed in the other believers listed above? For each of them I can see an . . . ummm, not an “excuse,” exactly, but a reason for them to actually expect meaningful insight into the important topic of wound healing to come from a study like this.

Let me go through them in order:

– The journal editor sees tons of submissions, probably only glances at some of them, and if you’re a journal editor you have to trust your staff, in the same way that, if you’re a football coach, you have to give the ball to the QB, no matter what you personally believe regarding his skills.

– The executives at the Nature publishing group want a continuing flow of papers and a continuing flow of income. They’ll outsource their scientific judgment to the journal editor.

– The associate editor and reviewers quite possibly work in the same field as the authors of the paper, in which case they may have been trained to actually expect meaningful insight into the important topic of wound healing to come from a study like this. Not good, but understandable.

– The leaders of the psychology department at Harvard University are going to promote the work of its students and faculty: they don’t see it as their job to judge the quality of the work, except for rare occasions such as dissertation defenses and tenure reviews.

– The two authors of the paper presumably came into the project with the expectation that meaningful insight into the important topic of wound healing could come from a study like this. Publication and publicity and recognition would just deepen their conviction on the topic, and it would be a rare research team indeed that would drastically revise their beliefs based on outside criticism. It can happen, but you wouldn’t expect it.

– The role of the Harvard Gazette is to publicize the university, not to vet the work they’re promoting. If you’re a writer or editor for that sort of publication, I guess you have to be a believer or else set your doubts at the door.

– Other Harvard faculty have an understandable loyalty to the institution, so whether or not they really believe that the replication rate in psychology is “statistically indistinguishable from 100%,” I can see that they’re not going to look hard for problems in work published by their colleagues at the university.

– Steven Levitt has published lots of credulous things over the years under the Freakonomics banner. He quite possibly thinks of this healing stuff as silly stuff–recreational social science, as it were–to which he would not apply the same critical standards to which he might hold serious work in economics.

– Scientific American . . . I guess they’ll publish pretty much anything nowadays. They probably weren’t vetting the science of the article they published–after all, its authors came from Harvard!

– The thousands of trusting readers . . . what can I say? You have to put your trust somewhere. As noted above, decades of promotion of crap science in academia, the news media, and, more recently, social media (as discussed here) have numbed the audience.

So, we’re left with Sean Carroll. He’s not at Harvard, nor is he a psychologist, so he has no reason for institutional loyalty here. According to his website, his current interests include “foundational questions in quantum mechanics, spacetime, statistical mechanics, complexity, and cosmology, with occasional dabblings elsewhere.”

Physicists do very careful experiments, and they have very precise theories. So I’d expect them to be skeptical of the sorts of vague theories and sloppy experiments as in the above-discussed paper. But also I’m reminded of the stories that Michael Weissman has shared with me about the physics establishment endorsing really bad physics education research.

Which makes me wonder: do many physicists, including Carroll, just have a very naive understanding of how social science works? Maybe they have an (unjustified) belief in the rigor of randomized experiments and hypothesis tests. Statistics textbooks are always slamming research that doesn’t involve random sampling or randomized treatment assignment, and they go on and on about how to calculate p-values, while talking very little or not at all about substantive theory and measurement. I could imagine an uninformed outsider such as Carroll getting the vague impression that randomization plus statistical significance equals rigorous science. Which would be scary, if that’s the reason.

(On the flip side, one thing I’ve seen from some mathematicians and physicists is a naivety in the opposite direction, for example saying that all observational social science statistics is wrong because there’s no randomization, or saying that all polls are bad unless they use pure random sampling, etc.)

Another explanation for Carroll’s promotion of this particular bit of junk science is a simple celebrities-stick-together thing. When an academic celebrity hears about another academic celebrity, there’s the temptation to do some backscratching: everybody wins, right? The half-fame of moderate academic celebrity is maybe more pleasant than dealing with annoying academic things such as faculty meetings, grading, and getting papers rejected by journals, and if you maintain that by giving softball interviews to other academic celebrities, that could be a small price to pay.

I’m still guessing that Carroll is not as gullible as he appears to be in that podcast, that if we were to put him on the spot he’d say something mellow like, “Hey, sure, I know it might be junk! But if it’s junk, it won’t replicate. That’s the self-correcting nature of science! I’m just open-minded and writing about fun stuff.” To which I’d reply: sure, that skill–being credulous and publicizing press releases and giving softball interviews to frauds–is a skill that 80 zillion NPR reporters and Ted talk executives already have. You have a Ph.D. in physics, and that’s how you want to spend your time?? Just seems weird to me. But I guess that it’s a lifestyle that some academics enjoy, and to get there, it helps to tell people what they want to hear.

Again, this is nothing personal about Sean Carroll, a man I’ve never met. He’s just an example of a larger phenomenon of interest, coming from Raghu’s seemingly-simple question, “Does anyone actually expect meaningful insight into the important topic of wound healing to come from a study like this?”