Better priors for AR, ARX, LTX, DR, MA, ARMA, and VAR models
Statistical Modeling, Causal Inference, and Social Science 2025-02-14
Last year we arXived the paper "The ARR2 prior: flexible predictive prior definition for Bayesian auto-regressions," by David Kohns, Noa Kallioinen, Yann McLatchie, and me (Aki). Last week, we uploaded a revised version.
The idea of the paper is to have a prior that is predictively consistent: if we add more terms (e.g., lags or covariates) to the model, the prior predictive distribution should not change substantially. This is achieved by placing the prior on R-squared and using a variance decomposition inspired by the R2D2 prior. Such priors make it possible to use big models without fear of overfitting: there is no need to try to select the number of lags, and the information in the data is used more efficiently. If needed, a smaller model can be selected afterwards by treating model selection as a decision problem given the big model (see, e.g., Using reference models in variable selection and Advances in projection predictive inference).
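To make the construction concrete, here is a minimal Stan sketch of an R2-based prior for an AR(p) model in the spirit of ARR2. This is a simplified illustration, not the code from the paper or its repo: the paper derives the exact model-specific coefficient scalings, while the hyperparameters mu_R2, phi_R2, and alpha here are generic placeholders.

```stan
// Sketch of an R2-based prior for an AR(p) model (simplified; the paper's
// ARR2 prior derives the exact per-model coefficient scaling).
data {
  int<lower=1> T;                  // number of observations
  int<lower=1> p;                  // number of lags
  vector[T] y;                     // time series (assumed standardized)
  real<lower=0, upper=1> mu_R2;    // prior mean of R2
  real<lower=0> phi_R2;            // prior precision of R2
  vector<lower=0>[p] alpha;        // Dirichlet concentration over lags
}
parameters {
  real<lower=0, upper=1> R2;       // prior share of explained variance
  simplex[p] psi;                  // decomposition of R2 across lags
  vector[p] b;                     // AR coefficients, b[j] on lag j
  real<lower=0> sigma;             // innovation scale
}
transformed parameters {
  real tau2 = R2 / (1 - R2);       // signal-to-noise ratio implied by R2
}
model {
  R2 ~ beta(mu_R2 * phi_R2, (1 - mu_R2) * phi_R2);
  psi ~ dirichlet(alpha);
  b ~ normal(0, sigma * sqrt(tau2 * psi));  // lag j gets variance share psi[j]
  sigma ~ student_t(3, 0, 1);
  for (t in (p + 1):T) {
    real mu = 0;
    for (j in 1:p) mu += b[j] * y[t - j];
    y[t] ~ normal(mu, sigma);
  }
}
```

Because the total coefficient variance is tied to R2 through tau2 and then split across lags by the simplex psi, increasing p only redistributes the same prior signal variance instead of inflating it, which is the predictive consistency described above.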
The first version included derivations for AR, ARX, LTX, and DR models. The revision adds results on the implied priors on characteristic roots and partial autocorrelations, a case study with quasi-cyclical behavior, and extensions to MA, ARMA, ARDL, and VAR models. The paper and the associated git repo include Stan code.