"Proper scoring rules and linear estimating equations in exponential families" (Next Week at the Statistics Seminar)

Three-Toed Sloth 2014-04-01

Summary:

Attention conservation notice: Only of interest if you (1) care about estimating complicated statistical models, and (2) will be in Pittsburgh on Monday.

Steffen Lauritzen, "Proper scoring rules and linear estimating equations in exponential families"
Abstract: In models of high complexity, the computational burden involved in calculating the maximum likelihood estimator can be forbidding. Proper scoring rules (Brier 1950, Good 1952, Bregman 1967, de Finetti 1975) such as the logarithmic score, the Brier score, and others, induce natural unbiased estimating equations that generally lead to consistent estimation of unknown parameters. The logarithmic score corresponds to maximum likelihood estimation, whereas a score function introduced by Hyvärinen (2005) leads to linear estimating equations for exponential families.
We shall briefly review the facts about proper scoring rules and their associated divergences, entropy measures, and estimating equations. We show how Hyvärinen's rule leads to particularly simple estimating equations for Gaussian graphical models, including Gaussian graphical models with symmetry.
The lecture is based on joint work with Philip Dawid, Matthew Parry, and Peter Forbes. For a recent reference see: P. G. M. Forbes and S. Lauritzen (2013), "Linear Estimating Equations for Exponential Families with Application to Gaussian Linear Concentration Models", arXiv:1311.0662.
Time and place: 4--5 pm on Monday, 7 April 2014, in 125 Scaife Hall.
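(To gesture, very roughly, at where the linearity comes from; this gloss is mine and not part of the abstract. The Hyvärinen score of a smooth, everywhere-positive density p on R^d is

    \[
      S(x, p) \;=\; \Delta_x \log p(x) \;+\; \tfrac{1}{2}\,\bigl\lVert \nabla_x \log p(x) \bigr\rVert^{2},
    \]

which involves only derivatives in x, so the normalizing constant of p drops out. For an exponential family with density proportional to h(x) \exp\{\theta^{\top} t(x)\},

    \[
      \nabla_x \log p(x;\theta) \;=\; \nabla_x t(x)^{\top}\,\theta \;+\; \nabla_x \log h(x),
    \]

which is linear in \theta, so the sample average of S(x, p_\theta) is a quadratic function of \theta, and setting its gradient to zero gives estimating equations that are linear in \theta: no normalizing constant to compute and no iterative optimization needed.)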

Much of what I know about graphical models I learned from Prof. Lauritzen's book. His work on sufficient statistics and extremal models, and their connections to symmetry and prediction, has shaped how I think about big chunks of statistics, including stochastic processes and networks. I am really looking forward to this.

(To add some commentary purely of my own: I sometimes encounter the idea that frequentist statistics is somehow completely committed to maximum likelihood, and has nothing to offer when that fails, as it sometimes does [1]. While I can't of course speak for every frequentist statistician, this seems silly. Frequentism is a family of ideas about when probability makes sense, and it leads to some ideas about how to evaluate statistical models and methods, namely, by their error properties. What justifies maximum likelihood estimation, from this perspective, is not the intrinsic inalienable rightness of taking that function and making it big. Rather, it's that in many situations maximum likelihood converges to the right answer (consistency), and in a somewhat narrower range of situations will converge as fast as anything else (efficiency). When those fail, so much the worse for maximum likelihood; use something else that is consistent. In situations where maximizing the likelihood has nice mathematical properties but is computationally intractable, so much the worse for maximum likelihood; use something else that's consistent and tractable. Estimation by minimizing a well-behaved objective function has many nice features, so when we give up on likelihood it's reasonable to try minimizing some other proper scoring rule, but again, there's nothing which says we must.)
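(To make that concrete, here is a small numerical sketch of my own, not taken from the talk or the paper: the Hyvärinen-score estimator for a zero-mean Gaussian with concentration matrix K. Minimizing the empirical Hyvärinen score gives the linear estimating equation K S + S K = 2 I, where S is the empirical second-moment matrix. In the saturated model this is solved by K = S^{-1}, so it reproduces the MLE; the payoff in the Forbes and Lauritzen paper is that the equations stay linear even when K is constrained, e.g. by a graphical model, where the likelihood equations require iterative fitting.

    import numpy as np
    from scipy.linalg import solve_sylvester

    rng = np.random.default_rng(0)
    p, n = 4, 100_000

    # A "true" concentration (inverse covariance) matrix: tridiagonal, i.e. a chain graph.
    K_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(K_true), size=n)

    S = X.T @ X / n                      # empirical second-moment matrix

    # Hyvarinen-score estimating equation for the saturated model: K S + S K = 2 I,
    # a Sylvester/Lyapunov equation, hence linear in K.
    K_score = solve_sylvester(S, S, 2 * np.eye(p))

    # Maximum likelihood estimate in the saturated model, for comparison.
    K_mle = np.linalg.inv(S)

    print(np.abs(K_score - K_mle).max())   # tiny: the two coincide in the saturated model

With zeros imposed on K, the same system, restricted to the free entries of K, stays linear in those entries, which is, roughly, the "particularly simple estimating equations" point of the abstract.)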

[1]: It's not worth my time today to link to particular examples; I'll just say that from my own reading and conversation, this opinion is not totally confined to the kind of website which proves that rule 34 applies even to Bayes's theorem.

Link:

http://bactra.org/weblog/1099.html
