Here’s a useful response by Christakis to criticisms of the contagion-of-obesity claims

Statistical Modeling, Causal Inference, and Social Science 2024-09-08

Yesterday I posted an update on citations of the influential paper from 2007 by sociologist Nicholas Christakis and political scientist James Fowler, “The Spread of Obesity in a Large Social Network over 32 Years,” which concluded, “Network phenomena appear to be relevant to the biologic and behavioral trait of obesity, and obesity appears to spread through social ties.”

As I wrote yesterday, several other researchers had criticized that paper on methodological grounds, and in my post I characterized it as being “many times debunked” and expressed distress that the original claim seems to be regularly cited without reference to the published criticisms by economists Jason Fletcher and Ethan Cohen-Cole, mathematician Russ Lyons, political scientists Hans Noel and Brendan Nyhan, and statisticians Cosma Shalizi and Andrew Thomas.

That said, I am not an expert in this field. I have read the articles linked in the above post but have not kept track of the later literature.

In comments, Christakis shares his perspective on all this:

I [Christakis] think this post is an incomplete summary of the very carefully expressed claims in the original 2007 paper, and also an inaccurate summary of the state of the literature. You also may want to look at our original exchanges on this blog from years ago, and at our published responses to prior critiques, including some of the decades-old critiques (often inaccurate) that you mention.[1]

Many papers have replicated our findings of social contagion with respect to obesity (and the various other phenomena discussed in our original suite of papers), and many papers have evaluated the early methods we used (based on generalized estimating equations) and have supported that approach.

For instance, analyses by various scholars supported the GEE approach, e.g., by estimating how large the effect of unobserved factors would have to be to subvert confidence in the results.[2],[3],[4],[5] Other papers supported the findings in other ways.[6],[7],[8],[9],[10] This does not mean, of course, that this GEE approach does not require various assumptions or is perfectly able to capture causal effects. This is one reason the 2007 paper described exactly what models were implemented, was judicious in its claims, and also proposed certain innovations for causal identification, including the “edge-directionality test.” The strengths and limitations of the edge-directionality test for causal identification have subsequently been explored by computer scientists,[11] econometricians,[12] statisticians,[13] and sociologists.[14]

Work by other investigators with other datasets and approaches has generally confirmed the 2007 findings. Pertinent work regarding weight and related behaviors is quite diverse, including everything from observational studies to experiments.[15],[16],[17],[18],[19],[20],[21],[22],[23] Of course, as expected, work has also confirmed the existence of homophily with respect to weight. Still other studies have used experimental and observational methods to confirm that one mechanism of the interpersonal spread of obesity might indeed be a spread of norms, as speculated in the 2007 paper.[24],[25],[26],[27]

Of course, methods to estimate social contagion with observational data regarding complex networks continue to evolve, and continue to require various assumptions, as ours did in 2007. As before, I would love to see someone offer a superior statistical method for observational data. And I should also clarify that the public-use version of the FHS-Net data (posted at dbGap) is not the same version as the one we based our 2007 analyses on (a constraint imposed by the FHS itself, in ways documented there); however, the original data is available via the FHS itself (or at least was). At times, this difference in datasets explains why other analyses have reached slightly different conclusions than we did.

In our 2007 paper, we also documented an association between various ego and alter traits out to a geodesic distance of three degrees of separation. We also did this with respect to public goods contributions in a network of hunter-gatherers in Tanzania[28] and smoking in the FHS-Net.[29] Other observational studies have also noted this empirical regularity with respect to information,[30],[31] concert attendance,[32] or even the risk of being murdered.[33] We summarized this topic in 2013.[34]

Moreover, we and others have observed actual contagion up to three degrees of separation in experiments, a setting that absolutely excludes homophily or context as an explanation for the clustering.[35],[36],[37] For instance, Moussaid et al. documented hyper-dyadic contagion of risk perception experimentally.[38] Another experiment found that the reach of propagation in a subjective judgment task “rarely exceeded a social distance of three to four degrees of separation.”[39] A massive experiment with 61 million people on Facebook documented the spread of voting behavior to two degrees of separation.[40] A large field experiment with 24,702 villagers in Honduras showed that certain maternal and child health behaviors likewise spread to at least two degrees of separation.[41] And a 2023 study involving 2,491 women household heads in 50 poor urban residential units in Mumbai documented social contagion, too.[42]

In addition, my own lab, in the span from as early as 2010 to as recently as 2024, has published many demanding randomized controlled field trials and other experiments documenting social contagion, as noted above. For instance, my group published our first experiment with social contagion in 2010,[43] as well as many other experiments involving social contagion in economic games using online subjects,[44],[45],[46] often stimulating still other work.[47],[48],[49]

Many other labs, in part stimulated by our work, have conducted many other experiments documenting social contagion. The idea of using a network-based approach to exploit social contagion to disseminate an intervention – so as to change knowledge, attitudes, or practices at both individual and population levels – has been evaluated in a range of settings.[50],[51],[52],[53],[54],[55],[56],[57],[58]

Finally, other rigorous observational and experimental studies involving large samples and mapped networks have explored diverse outcomes in recent years, beyond the examples reviewed so far. For instance, phenomena mediated by online interactions include phone viruses,[59] diverse kinds of information,[60],[61],[62] voting,[63] and emotions.[64] In face-to-face networks, phenomena as diverse as gun violence in Chicago,[65] microfinance uptake in India,[66] bullying in American schools,[67] chemotherapy use by physicians,[68] agricultural technology in Malawi,[69] and risk perception[70] have been shown to spread by social contagion.

The above, in my view, is a fairer and more complete summary of the impact, relevance, and accuracy of our claims about obesity in particular and social contagion in general. Work in the field of social contagion in complex networks, using observational and experimental studies, has exploded since we published our 2007 paper.

The list of references is at the end of Christakis’s comment.
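To make the modeling discussion a little more concrete for readers who haven’t looked at the 2007 paper: the GEE approach that Christakis describes above is, roughly speaking, a logistic regression of an ego’s obesity status on an alter’s current and lagged obesity status (plus the ego’s own lagged status and covariates), with standard errors that account for repeated observations of the same ego. Here’s a minimal sketch in Python using statsmodels, with simulated data and made-up variable names; this is just the general shape of such a model, not the authors’ actual specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated ego-alter panel: one row per ego-alter pair per exam wave.
# All variable names here are made up for illustration; they are not the
# actual FHS-Net field names.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ego_id": rng.integers(0, 400, size=n),          # egos contribute multiple rows
    "ego_obese_prev": rng.integers(0, 2, size=n),    # ego obese at previous exam
    "alter_obese": rng.integers(0, 2, size=n),       # alter obese at current exam
    "alter_obese_prev": rng.integers(0, 2, size=n),  # alter obese at previous exam
    "ego_age": rng.normal(45, 10, size=n),
    "ego_female": rng.integers(0, 2, size=n),
})
# Outcome generated with a modest "alter effect," purely so the fit has something to find.
logit = -1.0 + 1.5 * df["ego_obese_prev"] + 0.4 * df["alter_obese"]
df["ego_obese"] = rng.binomial(1, (1 / (1 + np.exp(-logit))).to_numpy())

# Logistic GEE: ego's current obesity on the alter's current and lagged obesity,
# the ego's own lagged obesity, and ego covariates, clustering on the ego because
# the same person appears in many ego-alter pairs.
model = smf.gee(
    "ego_obese ~ alter_obese + alter_obese_prev + ego_obese_prev + ego_age + ego_female",
    groups="ego_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Independence(),
)
print(model.fit().summary())
```

The coefficient on alter_obese is the “alter effect” that the critiques are about: the disagreement is not over how to fit this regression but over whether that coefficient can be read causally in the presence of homophily and shared environment.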
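Similarly, the “three degrees of separation” claim is a statement about how ego-alter associations decay with geodesic distance in the network. Here’s a toy sketch of that kind of check, using networkx, a simulated network, a placeholder trait, and a single permutation baseline (with random data the correlations will all be near zero; the point is just the computation):

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Toy network and a placeholder binary trait (a real analysis would use the
# measured friendship/family ties and observed obesity status).
G = nx.watts_strogatz_graph(n=500, k=6, p=0.1, seed=0)
trait = {v: int(rng.random() < 0.3) for v in G}  # ~30% "prevalence"

def concordance_by_distance(G, trait, max_d=4):
    """Correlation of the trait across pairs of nodes at each geodesic distance 1..max_d."""
    pairs = {d: [] for d in range(1, max_d + 1)}
    for source, dists in nx.all_pairs_shortest_path_length(G, cutoff=max_d):
        for target, d in dists.items():
            if 1 <= d <= max_d and source < target:  # count each unordered pair once
                pairs[d].append((trait[source], trait[target]))
    return {d: np.corrcoef(np.array(p).T)[0, 1] for d, p in pairs.items()}

observed = concordance_by_distance(G, trait)

# Permutation baseline: shuffle the trait over nodes while keeping the network fixed.
shuffled_values = list(trait.values())
rng.shuffle(shuffled_values)
permuted = concordance_by_distance(G, dict(zip(G, shuffled_values)))

for d in sorted(observed):
    print(f"distance {d}: observed r = {observed[d]:.3f}, permuted r = {permuted[d]:.3f}")
```

In the real analyses, the question is whether the distance-1, -2, and -3 concordances stand out against a null like this one, and, as with the GEE coefficient, whether any excess concordance should be attributed to contagion rather than homophily or shared context.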

Back in 2010 I wrote that this area was ripe for statistical development and also ripe for development in experimental design and data collection. As of 2024, the area is not just “ripe for development” in experimental design, data collection, and statistical analysis; there have also been many developments in all these areas, by Christakis, his collaborators, and other research groups, and my earlier post was misleading: my ignorance of that followup literature is no excuse for writing as if it didn’t exist.

One question here is how to think about the original Christakis and Fowler (2007) paper. On one hand, I remain persuaded by the critics that it made strong claims that were not supported by the data at hand. On the other hand, it was studying an evidently real general phenomenon and it motivated tons of interesting and important research.

Whatever its methodological issues, Christakis and Fowler (2007) is not like the ESP paper or the himmicanes paper or the ovulation-and-voting papers, say, whose only useful contributions to science were to make people aware of the replication crisis and motivate some interesting methodological work. One way to say this is that the social contagion of behavior is both real and interesting. I don’t think that’s the most satisfying way to put this—the people who study ESP, social priming, evolutionary psychology, etc., would doubtless say that their subject areas are both real and interesting too!—so consider this paragraph as a placeholder for a fuller investigation of this point (ideally done by someone who can offer a clearer perspective than I can here).

In summary:

1. I remain convinced by the critics that the original Christakis and Fowler paper did not have the evidence to back up its claims.

2. But . . . that doesn’t mean there’s nothing there! In their work, Christakis and Fowler (2007) were not just shooting in the dark. They were studying an interesting and important phenomenon, and the fact that their data were too sparse to answer the questions they were trying to answer, well, that’s what motivates future work.

3. This work does not seem to me to be like various notorious examples of p-hacked literature such as beauty-and-sex-ratio, ovulation-and-clothing, mind-body-healing, etc., and I think a key difference is that the scientific hypotheses involving contagion of behavior are more grounded in reality rather than being anything-goes theories that could be used to explain any pattern in the data.

4. I was wrong to refer to the claim of contagion of obesity as being debunked. That original paper had flaws, and I do think that when it is cited, the papers by its critics should be cited too. But that doesn’t mean the underlying claims are debunked. This one’s tricky—it relates to the distinction between evidence and truth—and that’s why followups such as Christakis’s comment (and the review article that it will be part of) are relevant.

I want to thank Christakis again for his thoughtful and informative response, and I apologize for the inappropriate word “debunked.” I’ve usually been so careful over the years to distinguish evidence and truth, but this time I was sloppy—perhaps in the interest of telling a better story. I’ll try to do better next time.

P.S. I think that one problem here is the common attitude that a single study should be definitive. Christakis and Fowler don’t have that attitude—they’ve done lots of work in this area, not just resting their conclusions on one study—and I don’t have that attitude either. I’m often saying this, that (a) one study is rarely convincing enough to believe on its own, and, conversely, (b) just because a particular study has fatal flaws in its data, that doesn’t mean that nothing is there. We usually criticize the single-study attitude when researchers or journalists take one provisional result and run with it. In this case, though, I fell into the single-study fallacy myself by inappropriately taking the well-documented flaws of that one paper as evidence that nothing was there.

That all said, I’m sure that different social scientists have different views on social contagion, and so I’m not trying to present Christakis’s review as the final word. Nor is he, I assume. It’s just appropriate for me to summarize his views on the matter based on all this followup research he discusses and not have the attitude that everything stopped in 2011.