What are the grand challenges in Bayesian computation?
Statistical Modeling, Causal Inference, and Social Science 2025-02-19
Here’s what Anirban Bhattacharya, Antonio Linero, and Chris Oates have to say:
Grand Challenge 1: Understanding the Role of Parametrisation
Grand Challenge 2: Community Benchmarks
Grand Challenge 3: Reliable Assessment of Posterior Approximations
They also mention “software support for the full Bayesian workflow,” which I agree is super-important.
I’d also add:
– Scalable computing: being able to fit the models you want to fit, on large datasets
– Scalable modeling: being able to build bigger models as you get more data. We can easily expand models in some directions—adding predictors in a regression, adding terms to a spline model, adding layers to a neural net, etc.—but, in other ways, model expansion can be a challenge. Even for something as simple as multilevel regression and poststratification, we can quickly get tangled in how to program up interactions; see the sketch after this list.
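To give a flavor of that tangle, here’s a minimal sketch in Python. The poststratification factors and their level counts are made up for illustration; the point is just how fast the cross-classification grows once you start crossing demographic factors in an MRP model.

```python
from itertools import combinations
from math import prod

# Hypothetical poststratification factors with made-up level counts,
# roughly in the style of a U.S. survey adjustment.
factors = {"age": 6, "sex": 2, "ethnicity": 4, "education": 5, "state": 50}

# The poststratification table is the full cross-classification of levels.
n_cells = prod(factors.values())
print(f"poststratification cells: {n_cells}")  # 6*2*4*5*50 = 12000

# Coefficient counts for a model with varying intercepts for each factor
# (main effects) plus a batch of varying intercepts for every two-way
# interaction, one coefficient per combination of levels.
main_effects = sum(factors.values())
two_way = sum(a * b for a, b in combinations(factors.values(), 2))
print(f"main-effect coefficients:   {main_effects}")  # 67
print(f"two-way interaction coeffs: {two_way}")       # 954
```

With partial pooling, a thousand or so coefficients is not itself a statistical problem; the tangle is the bookkeeping of writing out every interaction by hand in a regression formula or Stan program and keeping the model aligned with the poststratification table, and it only gets worse if you want three-way interactions or want to expand the model as new data arrive.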
There are many more grand challenges in Bayesian computation. Feel free to put your ideas in the comments.
P.S. Vaguely relevant here is the article by Aki and me, “What are the most important statistical ideas of the past 50 years?” Kind of amazing that we wrote that article five years ago. Time flies!