“We conclude that apparent effects of growth mindset interventions on academic achievement are likely attributable to inadequate study design, reporting flaws, and bias.”

Statistical Modeling, Causal Inference, and Social Science 2025-12-11

Joshua Brooks points us to this research article by Brooke Macnamara and Alexander Burgoyne, “Do growth mindset interventions impact students’ academic achievement? A systematic review and meta-analysis with recommendations for best practices,” which states:

According to mindset theory, students who believe their personal characteristics can change–that is, those who hold a growth mindset–will achieve more than students who believe their characteristics are fixed. Proponents of the theory have developed interventions to influence students’ mindsets, claiming that these interventions lead to large gains in academic achievement. Despite their popularity, the evidence for growth mindset intervention benefits has not been systematically evaluated considering both the quantity and quality of the evidence. Here, we provide such a review by (a) evaluating empirical studies’ adherence to a set of best practices essential for drawing causal conclusions and (b) conducting three meta-analyses. When examining all studies (63 studies, N = 97,672), we found major shortcomings in study design, analysis, and reporting, and suggestions of researcher and publication bias: Authors with a financial incentive to report positive findings published significantly larger effects than authors without this incentive. Across all studies, we observed a small overall effect . . . which was nonsignificant after correcting for potential publication bias. No theoretically meaningful moderators were significant. When examining only studies demonstrating the intervention influenced students’ mindsets as intended . . . the effect was nonsignificant . . . When examining the highest-quality evidence . . . the effect was nonsignificant . . . We conclude that apparent effects of growth mindset interventions on academic achievement are likely attributable to inadequate study design, reporting flaws, and bias.

I haven’t read the paper, let alone the 63 cited studies, but I thought I’d do my part by getting this into the discussion.

We talked about earlier critical work by Macnamara on growth mindset back in 2018, where I discussed how to think about effect sizes for such interventions.

My main message was that, if mindset interventions work, we’d still expect small average effects, because they won’t work for all students. As I wrote, “it’s a small effect in the context of any student, and of course it’s a small effect. It’s hard to get good grades, and there’s no magic way to get there!”

In one sense, my conclusion is negative on mindset interventions in that I’m saying we shouldn’t expect to see large effects, and any large effects that do show up are likely to be huge overestimates.

In another sense, my conclusion is positive on mindset interventions: given that any average effects will be small, the lack of statistically significant average effects in small or even moderately large studies does not imply that mindset interventions don't work; it may just be that they work only in some settings, and individual effects will mostly be small.
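To see why heterogeneity pushes average effects down and makes them hard to detect, here's a minimal simulation sketch. All the numbers are hypothetical, chosen only for illustration: suppose the intervention gives a 0.3-standard-deviation boost to 20% of students and does nothing for the rest. The average effect is then only 0.06 SD, and even a study with 500 students per arm will usually fail to reach statistical significance:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical numbers for illustration: the intervention helps only
# 20% of students, with a 0.3-SD benefit for responders and zero
# effect for everyone else.
p_responder = 0.2
effect_responder = 0.3
avg_effect = p_responder * effect_responder  # 0.06 SD on average

n_per_arm = 500   # a moderately large study
n_sims = 2000
rejections = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_arm)
    responders = rng.random(n_per_arm) < p_responder
    treated = rng.normal(0.0, 1.0, n_per_arm) + effect_responder * responders
    # simple two-sample z-test on the difference in means
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n_per_arm
                 + control.var(ddof=1) / n_per_arm)
    rejections += abs(diff / se) > 1.96

power = rejections / n_sims
print(f"average effect: {avg_effect:.2f} SD")
print(f"power with {n_per_arm} per arm: {power:.2f}")
```

Under these made-up assumptions the power comes out well under 50%, so a nonsignificant result in a study of this size is entirely consistent with the intervention working for a meaningful minority of students.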

Also relevant is this discussion we had a few years ago on mindset interventions, with contributions from Russell Warne and David Yeager. Lots to chew on here. This example also helped form my thinking on varying treatment effects, leading to our causal quartets paper and some future lines of research.