Correcting for multiple comparisons in a Bayesian regression model
R-bloggers 2013-08-20
Joe Northrup writes:
I have a question about correcting for multiple comparisons in a Bayesian regression model. I believe I understand the argument in your 2012 paper in the Journal of Research on Educational Effectiveness that when you have a hierarchical model, estimates are shrunk toward the group-level mean, and thus there is no need to add any additional penalty to correct for multiple comparisons. In my case I do not have hierarchically structured data—i.e., I have only one observation per group, but a categorical variable with a large number of categories. Thus, I am fitting a simple multiple regression in a Bayesian framework. Would putting a strong, mean-zero, multivariate normal prior on the betas in this model accomplish the same sort of shrinkage (it seems to me that it would), and do you believe this is a valid way to address criticism of multiple comparisons in this setting?
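The shrinkage the questioner has in mind can be sketched directly: with a normal likelihood and a mean-zero normal prior on the coefficients, the posterior mean is the ridge estimate, which pulls each beta toward zero. Below is a minimal illustration under assumed settings (one observation per category, known residual sd, a strong prior sd of 0.1); none of these numbers come from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: one observation per category, many categories,
# true effects concentrated near zero (assumptions, not from the post).
k = 50
beta_true = rng.normal(0, 0.1, k)
X = np.eye(k)                      # one observation per group: identity design
y = X @ beta_true + rng.normal(0, 1, k)

sigma = 1.0                        # residual sd, assumed known for simplicity
tau = 0.1                          # strong mean-zero prior sd on the betas

# Posterior mean under beta ~ N(0, tau^2 I) is the ridge estimate:
#   (X'X + (sigma/tau)^2 I)^{-1} X'y
shrunk = np.linalg.solve(X.T @ X + (sigma / tau) ** 2 * np.eye(k), X.T @ y)
ols = np.linalg.solve(X.T @ X, X.T @ y)   # flat-prior (least-squares) estimate

# The strong prior pulls the estimates toward zero:
print(np.abs(shrunk).mean() < np.abs(ols).mean())
```

With the identity design, the posterior mean is simply y / (1 + (sigma/tau)^2), so a prior sd of 0.1 shrinks each raw estimate by a factor of about 100.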
My reply: Yes, I think this makes sense. One way to address concerns about multiple comparisons is to run a simulation study, conditional on some reasonable assumptions about the true effects, and see how often you end up making statistically significant but wrong claims. Or, if you want to put in even more effort, you could run several simulation studies, demonstrating that if the true effects are concentrated near zero but you assume a weak prior, then the multiple comparisons problem would arise. That would be an interesting research direction, actually: studying how multiple comparisons problems gradually arise as the prior becomes weaker and weaker. Such an analysis could aid our understanding by bridging the classical and fully-informative Bayesian perspectives on multiple comparisons.
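The simulation study suggested above can be sketched in a few lines. Under assumed settings (true effects near zero, one noisy observation per group, a conjugate normal model with known residual sd), we weaken the prior sd tau step by step and count "significant" claims, i.e. posterior intervals excluding zero, along with how many of those claims get the sign of the true effect wrong. All settings are illustrative assumptions, not from the post.

```python
import numpy as np

rng = np.random.default_rng(1)

# True effects concentrated near zero; one observation per group; the
# coefficient prior is normal(0, tau^2), weakened across the grid of taus.
k, sigma, n_sims = 100, 1.0, 500
results = {}
for tau in [0.1, 0.5, 1.0, 5.0, 50.0]:
    shrink = tau**2 / (tau**2 + sigma**2)            # posterior-mean factor
    post_sd = sigma * tau / np.sqrt(sigma**2 + tau**2)
    n_sig = n_wrong = 0
    for _ in range(n_sims):
        beta = rng.normal(0, 0.1, k)                 # true effects near zero
        y = beta + rng.normal(0, sigma, k)           # one observation per group
        post_mean = shrink * y
        # "Significant": 95% posterior interval excludes zero.
        sig = np.abs(post_mean) > 1.96 * post_sd
        n_sig += int(sig.sum())
        # Wrong-sign (type S) claims among the significant ones.
        n_wrong += int((sig & (np.sign(post_mean) != np.sign(beta))).sum())
    results[tau] = (n_sig, n_wrong)
    print(f"tau={tau:5.1f}: {n_sig:5d} significant claims, {n_wrong} wrong-sign")
```

With a strong prior (tau = 0.1) almost nothing is flagged as significant, while the nearly flat prior (tau = 50) flags roughly five percent of the near-null effects, many with the wrong sign, which is the multiple comparisons problem reappearing as the prior weakens.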
The post Correcting for multiple comparisons in a Bayesian regression model appeared first on Statistical Modeling, Causal Inference, and Social Science.