That day in 1977 when Jerzy Neyman committed the methodological attribution fallacy.
Statistical Modeling, Causal Inference, and Social Science 2024-11-06
(Before going on, please read the last sentence of the P.P.S. below to put this post in context.)
Blake McShane points us to this 1977 article, “Frequentist Probability and Frequentist Statistics,” by Jerzy Neyman, the statistician who made fundamental contributions to the theory of sampling, experimentation, and statistical decision theory.
Neyman was a huge figure in our field, and even those of us who rarely use his methods recognize his importance. His article from 1977 gives me a pleasant feeling of nostalgia, as it reminds me of the classes in probability and statistics that I took in the early 1980s—perhaps not a surprise, given that one of my teachers was Grace Yang, who was a student of a student of Neyman.
Here, though, I don’t want to talk about statistical content but rather about a throwaway bit from the first page of Neyman’s article, where he writes, in response to remarks regarding the subjective Bayesian theory of Bruno de Finetti:
I [Neyman] feel a degree of amusement when reading an exchange between an authority in ‘subjectivistic statistics’ and a practicing statistician, more or less to this effect:
The Authority: ‘You must not use confidence intervals; they are discredited!’
Practicing Statistician: ‘I use confidence intervals because they correspond exactly to certain needs of applied work.’
In that passage, Neyman committed what we call the methodological attribution fallacy, which is that the many useful contributions of a good statistical consultant or collaborator will often be attributed to the statistician’s methods or philosophy rather than to the artful efforts of the statistician himself or herself.
I thought about this fallacy a few years ago after reflecting upon how Don Rubin told me that scientists are fundamentally Bayesian (even if they do not realize it), in that in his experience they interpret uncertainty intervals Bayesianly; and reflecting on a story told to me by Brad Efron about how his scientific collaborators found permutation tests and p-values to be the most convincing form of evidence. The lesson I took from these examples (and from Neyman’s remark above) is that a variety of methodological approaches can help people solve real scientific problems and that each of us tends to come away from a collaboration or consulting experience with the warm feeling that our methods really work, and that they represent how scientists really think.
When writing about this general issue, I concluded that we all have to be careful about reading too much into our collaborators’ and clients’ satisfaction with our methods.
It is not a criticism of Neyman’s ideas or his work to suggest that he was subject to the same bias of perspective that afflicted Rubin and Efron many years later.
P.S. From the blog archives here’s more on the big guy:
– In which I side with Neyman over Fisher
P.P.S. Just to emphasize one more time, because this is social media, and social media feeds on misunderstandings: Yes, I think Neyman made a mistake in what he wrote in that article, and I consider his mistake there to be an example of the methodological attribution fallacy. No, this does not mean I’m saying that Neyman is worse than other statisticians of his era in this respect, nor am I taking an anti-Neyman position in the classic (and, to me, annoying) “Fisher vs. Neyman” debate that was a major topic in statistics back in the 1970s, as well as before and after. Again, many of the greats have fallen for this fallacy—that’s one reason I find it interesting! When stupid people make mistakes, that’s interesting from a perspective of communication or “sociology.” When smart people make mistakes, that’s of direct intellectual interest, as it implies some deeper incoherence in their philosophies or worldviews. I point out Neyman’s error here in the same spirit that I pointed out the errors of Turing and Kahneman—not to score points against these culture heroes who made such important contributions to science, but to further learn from them, to learn from their missteps as well as their successes.