Meta-analysis with a single study

Statistical Modeling, Causal Inference, and Social Science 2024-11-11

Erik van Zwet, Witold Więcek, and I write:

Effect sizes typically vary among studies of the same intervention. In a random effects meta-analysis, this variation is explicitly taken into account. However, when we have only one study, the heterogeneity remains hidden and unaccounted for. We are left to assume that the effect in the study is a fixed property of the treatment, and to expect that the same effect will be present in another study. But this is usually not the case. We used the Cochrane Database of Systematic Reviews to estimate the distribution of the heterogeneity across more than 1,500 meta-analyses. We demonstrate how taking this information (and information about the effect sizes themselves) into account can help us better interpret the results of single trials.

And here’s what we find:

Our Bayesian “meta-analyses of single studies” perform much better than naively assuming non-varying effects. The prior on the heterogeneity results in better quantification of the uncertainty. The prior on the treatment effect reduces the mean squared error for estimating both the study-level and the population-level effects by about 60% on average. Such a reduction is equivalent to more than doubling the sample size.
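To spell out the sample-size arithmetic: for an unbiased estimate the mean squared error scales roughly as 1/n, so cutting it by 60% corresponds to multiplying the sample size by about 1/(1 - 0.6) = 2.5.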

Everything is fully reproducible with the R code provided in a GitHub repository. The data are publicly available here.
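If you just want the flavor of the model without opening the repository, here is a toy sketch in base R. It is only an illustration of the structure described above, not the code in the repository: the single study's estimate b with standard error s is treated as b | theta ~ N(theta, s^2) with theta | mu, tau ~ N(mu, tau^2), and priors on the population effect mu and the heterogeneity tau. The prior parameters and the function name single_study_meta are placeholders, not the values we estimated from the Cochrane data.

    # Toy sketch of a Bayesian "meta-analysis of a single study".
    # Model: b | theta ~ N(theta, s^2), theta | mu, tau ~ N(mu, tau^2),
    # with priors on the population effect mu and the heterogeneity tau.
    # Prior parameters below are illustrative placeholders.
    single_study_meta <- function(b, s,
                                  mu_mean = 0, mu_sd = 1,            # prior on mu (illustrative)
                                  tau_meanlog = -1.5, tau_sdlog = 1, # lognormal prior on tau (illustrative)
                                  n_draws = 1e5) {
      mu    <- rnorm(n_draws, mu_mean, mu_sd)           # population effect from its prior
      tau   <- rlnorm(n_draws, tau_meanlog, tau_sdlog)  # heterogeneity from its prior
      theta <- rnorm(n_draws, mu, tau)                  # study-level effect
      w     <- dnorm(b, theta, s)                       # importance weights: likelihood of the observed estimate
      idx   <- sample.int(n_draws, n_draws, replace = TRUE, prob = w / sum(w))
      list(theta = theta[idx],  # posterior draws of the study-level effect
           mu    = mu[idx],     # posterior draws of the population-level effect
           tau   = tau[idx])    # posterior draws of the heterogeneity
    }

    # Example: a single trial reporting an estimate of 0.3 with standard error 0.15
    post <- single_study_meta(b = 0.3, s = 0.15)
    c(theta_hat = mean(post$theta), mu_hat = mean(post$mu))

The structure is the whole point: the likelihood from the single estimate is combined with an informative prior on the heterogeneity, so the posterior uncertainty about the population effect reflects how much the effect could shift in a new study.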

Good stuff, I think. And these ideas should be applicable in other areas as well.