X on JLP
Statistical Modeling, Causal Inference, and Social Science 2013-04-07
Christian Robert writes on the Jeffreys-Lindley paradox. I have nothing to add to this beyond my recent comments:
To me, the Lindley paradox falls apart because of its noninformative prior distribution on the parameter of interest. If you really think there’s a high probability the parameter is nearly exactly zero, I don’t see the point of the model saying that you have no prior information at all on the parameter. In short: my criticism of so-called Bayesian hypothesis testing is that it’s insufficiently Bayesian.
To clarify, I’m speaking of all the examples I’ve ever worked on in social and environmental science. In some settings I can imagine a parameter being very close to zero, and in other settings I can imagine a parameter taking on just about any value in a wide range, but I’ve never seen an example in which a parameter could plausibly be either exactly zero or just about anything at all. Such examples might occur in areas of application that I haven’t worked on.
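For readers who want to see the mechanics, here’s a minimal numerical sketch of the textbook setup behind the paradox: testing a point null H0: theta = 0 against a diffuse alternative theta ~ N(0, tau^2), for a normal mean with known variance. The sample size, z-statistic, and prior scales below are arbitrary choices for illustration, not anything from Christian’s article:

```python
import math

def normal_pdf(x, mean, var):
    """Density of N(mean, var) evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# Arbitrary illustration: n observations with known sigma = 1, and a sample
# mean chosen so the classical z-statistic is exactly 2.5 (p ~ 0.012).
n, sigma, z = 100_000, 1.0, 2.5
se = sigma / math.sqrt(n)
xbar = z * se

print(f"two-sided p-value: {two_sided_p(z):.4f}")

# Bayes factor BF01 for H0: theta = 0 vs H1: theta ~ N(0, tau^2).
# Marginally, xbar ~ N(0, se^2) under H0 and xbar ~ N(0, tau^2 + se^2) under H1.
for tau in [0.1, 1.0, 10.0, 100.0]:
    bf01 = normal_pdf(xbar, 0.0, se**2) / normal_pdf(xbar, 0.0, tau**2 + se**2)
    print(f"tau = {tau:6.1f}: BF01 = {bf01:8.1f}")
```

The p-value stays fixed at about 0.012, while BF01 grows roughly in proportion to tau, so in the noninformative limit the Bayes factor favors the point null arbitrarily strongly. That’s the paradox, and the complaint above is that a prior diffuse enough to drive it doesn’t describe any real state of knowledge.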
I recommend Christian’s article. His perspective is different from mine, but we are not in disagreement; we’re just looking at different aspects of the problem.