Assumption-Free High-Dimensional Inference
Normal Deviate 2013-12-20
Richard Lockhart, Jonathan Taylor, Ryan Tibshirani and Rob Tibshirani have an interesting paper about significance tests for the lasso. The paper will appear in The Annals of Statistics. I was asked to write a discussion about the paper. Here is my discussion. (I suggest you read their paper before reading my discussion.)
Assumption-Free High-Dimensional Inference: Discussion of LTTT
Larry Wasserman
The paper by Lockhart, Taylor, Tibshirani and Tibshirani (LTTT) is an important advancement in our understanding of inference for high-dimensional regression. The paper is a tour de force, bringing together an impressive array of results, culminating in a set of very satisfying convergence results. The fact that the test statistic automatically balances the effect of shrinkage and the effect of adaptive variable selection is remarkable.
The authors make very strong assumptions. This is quite reasonable: to make significant theoretical advances in our understanding of complex procedures, one has to begin with strong assumptions. The following question then arises: what can we do without these assumptions?
1. The Assumptions
The assumptions in this paper — and in most theoretical papers on high dimensional regression — have several components. These include:
- The linear model is correct.
- The variance is constant.
- The errors have a Normal distribution.
- The parameter vector is sparse.
- The design matrix has very weak collinearity. This is usually stated in the form of incoherence, eigenvalue restrictions or incompatibility assumptions.
To the best of my knowledge, these assumptions are not testable when $p > n$. They are certainly a good starting place for theoretical investigations but they are indeed very strong. The regression function can be any function. There is no reason to think it will be close to linear. Design assumptions are also highly suspect. High collinearity is the rule rather than the exception, especially in high-dimensional problems. An exception is signal processing, in particular compressed sensing, where the user gets to construct the design matrix. In this case, if the design matrix is filled with independent random Normals, the design matrix will be incoherent with high probability. But this is a rather special situation.
None of this is meant as a criticism of the paper. Rather, I am trying to motivate interest in the question I asked earlier, namely: what can we do without these assumptions?
Remark 1 It is also worth mentioning that even in low dimensional models and even if the model is correct, model selection raises troubling issues that we all tend to ignore. In particular, variable selection makes the minimax risk explode (Leeb and Potscher 2008 and Leeb and Potscher 2005). This is not some sort of pathological risk explosion; rather, the risk is large in a neighborhood of 0, which is a part of the parameter space we care about.
2. The Assumption-Free Lasso
To begin, it is worth pointing out that the lasso has a very nice assumption-free interpretation.
Suppose we observe $(X_1,Y_1),\ldots,(X_n,Y_n)$, where $Y_i \in \mathbb{R}$ and $X_i \in \mathbb{R}^p$. The regression function $m(x) = \mathbb{E}(Y \mid X = x)$ is some unknown, arbitrary function. We have no hope to estimate $m$ nor do we have licence to impose assumptions on $m$.
But there is sound theory to justify the lasso that makes virtually no assumptions. In particular, I refer to Greenshtein and Ritov (2004) and Juditsky and Nemirovski (2000).
Let $\mathcal{L} = \{ x \mapsto x^T\beta :\ \beta \in \mathbb{R}^p \}$ be the set of linear predictors. For a given $\beta$, define the predictive risk

$$R(\beta) = \mathbb{E}(Y - \beta^T X)^2,$$

where $(X,Y)$ is a new pair. Let us define the best, sparse, linear predictor $\beta_*$ (in the $\ell_1$ sense), where $\beta_*$ minimizes $R(\beta)$ over the set $B(L) = \{\beta :\ \|\beta\|_1 \le L\}$. The lasso estimator $\hat\beta$ minimizes the empirical risk $\hat R(\beta) = \frac{1}{n}\sum_{i=1}^n (Y_i - \beta^T X_i)^2$ over $B(L)$. For simplicity, I'll assume that all the variables are bounded by $C$ (but this is not really needed). We make no other assumptions: no linearity, no design assumptions, and no models. It is now easily shown that

$$R(\hat\beta) \le R(\beta_*) + \sqrt{\frac{8 C^4 (1+L)^4}{n}\,\log\!\left(\frac{2(p+1)^2}{\delta}\right)}$$

except on a set of probability at most $\delta$.
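For readers who want a sense of where a bound of this form comes from, here is a back-of-the-envelope sketch of my own (assuming, as above, that $|Y| \le C$ and each $|X_j| \le C$; the constants are not meant to be sharp). Write $R(\beta) = \gamma^T \Sigma \gamma$ and $\hat R(\beta) = \gamma^T \hat\Sigma \gamma$, where $\gamma = (-1, \beta^T)^T$ and $\Sigma$, $\hat\Sigma$ are the population and empirical second-moment matrices of $(Y, X)$. Then

$$\sup_{\beta \in B(L)} \bigl| \hat R(\beta) - R(\beta) \bigr| \;\le\; (1+L)^2 \max_{j,k} \bigl| \hat\Sigma_{jk} - \Sigma_{jk} \bigr| \;\le\; (1+L)^2\, C^2 \sqrt{\frac{2}{n} \log\!\left(\frac{2(p+1)^2}{\delta}\right)}$$

with probability at least $1-\delta$, by Hoeffding's inequality and a union bound over the entries of $\hat\Sigma$. Combining this uniform deviation with $\hat R(\hat\beta) \le \hat R(\beta_*)$ gives $R(\hat\beta) \le R(\beta_*) + 2\sup_{\beta \in B(L)} |\hat R(\beta) - R(\beta)|$, which is the displayed bound.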
This shows that the predictive risk of the lasso comes close to the risk of the best sparse linear predictor. In my opinion, this explains why the lasso "works": it gives us a predictor with a desirable property, sparsity, while being computationally tractable, and its risk is close to that of the best sparse linear predictor.
3. Interlude: Weak Versus Strong Modeling
When developing new methodology, I think it is useful to consider three different stages of development:
- Constructing the method.
- Interpreting the output of the method.
- Studying the properties of the method.
I also think it is useful to distinguish two types of modeling. In strong modeling, the model is assumed to be true in all three stages. In weak modeling, the model is assumed to be true for stage 1 but not for stages 2 and 3. In other words, one can use a model to help construct a method. But one does not have to assume the model is true when it comes to interpretation or when studying the theoretical properties of the method. My discussion is guided by my preference for weak modeling.
4. Assumption-Free Inference: The HARNESS
Here I would like to discuss an approach I have been developing with Ryan Tibshirani. We call this High-dimensional Agnostic Regression Not Employing Structure or Sparsity, or the HARNESS. The method is a variant of the idea proposed in Wasserman and Roeder (2009).
The idea is to split the data into two halves, $\mathcal{D}_1$ and $\mathcal{D}_2$. For simplicity, assume that $n$ is even so that each half has size $n/2$. From the first half $\mathcal{D}_1$ we select a subset of variables $S$. The method is agnostic about how the variable selection is done. It could be forward stepwise, lasso, elastic net, or anything else. The output of the first part of the analysis is the subset of predictors $S$ and an estimator $\hat\beta$. The second half of the data $\mathcal{D}_2$ is used to provide distribution free inferences for the following questions:
- What is the predictive risk of $\hat\beta$?
- How much does each variable in $S$ contribute to the predictive risk?
- What is the best linear predictor using the variables in $S$?
All the inferences from $\mathcal{D}_2$ are interpreted as being conditional on $\mathcal{D}_1$. (A variation is to use $\mathcal{D}_1$ only to produce $S$ and then construct the coefficients of the predictor from $\mathcal{D}_2$. For the purposes of this discussion, we use $\hat\beta$ from $\mathcal{D}_1$.)
In more detail, let

$$R = \mathbb{E}\,\bigl| Y - X^T \hat\beta \bigr|,$$

where the randomness is over the new pair $(X,Y)$; we are conditioning on $\mathcal{D}_1$. Note that in this section I have changed the definition of the risk $R$ to be on the absolute scale, which is more interpretable. In the above equation, it is understood that $\hat\beta_j = 0$ when $j \notin S$. The first question refers to producing an estimate and confidence interval for $R$ (conditional on $\mathcal{D}_1$). The second question refers to inferring

$$R_j = \mathbb{E}\,\bigl| Y - X^T \hat\beta_{(j)} \bigr| - \mathbb{E}\,\bigl| Y - X^T \hat\beta \bigr|$$
for each $j \in S$, where $\hat\beta_{(j)}$ is equal to $\hat\beta$ except that its $j$th coordinate is set to 0. Thus, $R_j$ is the risk inflation from excluding $X_j$. The third question refers to inferring

$$\beta_* = \operatorname*{arg\,min}_{\beta}\, \mathbb{E}\bigl( Y - X_S^T \beta \bigr)^2,$$
the coefficients of the best linear predictor for the chosen model. We call $\beta_*$ the projected parameter. Hence, $x_S^T \beta_*$ is the best linear approximation to $m(x)$ on the linear space spanned by the selected variables.
A consistent estimate of $R$ is

$$\hat R = \frac{2}{n} \sum_{i \in \mathcal{D}_2} \delta_i,$$

where the sum is over $\mathcal{D}_2$ and $\delta_i = | Y_i - X_i^T \hat\beta |$. An approximate $1-\alpha$ confidence interval for $R$ is $\hat R \pm z_{\alpha/2}\, s/\sqrt{n/2}$, where $s$ is the standard deviation of the $\delta_i$'s.
The validity of this confidence interval is essentially distribution free. In fact, if we want to be purely distribution free and avoid asymptotics, we could instead define $R$ to be the median of the law of $\delta = |Y - X^T \hat\beta|$. Then the order statistics of the $\delta_i$'s can be used in the usual way to get a finite sample, distribution free confidence interval for $R$.
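As an aside, here is a minimal sketch (my own code, not part of any HARNESS software) of that standard order-statistic interval for a median, assuming the $\delta_i$'s from the second half of the data are available as a numeric array:

```python
import numpy as np
from scipy.stats import binom

def median_ci(deltas, alpha=0.05):
    """Finite-sample, distribution-free confidence interval for the median of delta.

    Order-statistic construction: the interval [delta_(j), delta_(m-j+1)] covers
    the median with probability 1 - 2*P(Binomial(m, 1/2) <= j - 1), so we pick j
    so that this tail probability stays below alpha/2.
    """
    d = np.sort(np.asarray(deltas))
    m = len(d)
    j = max(int(binom.ppf(alpha / 2, m, 0.5)), 1)   # 1-based index of lower order statistic
    return d[j - 1], d[m - j]                        # (m - j + 1)-th order statistic is d[m - j]

# Example: in the HARNESS, deltas would be |Y_i - X_i^T beta_hat| over the second half.
rng = np.random.default_rng(0)
print(median_ci(np.abs(rng.normal(size=200))))
```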
Estimates and confidence intervals for $R_j$ can be obtained from the differences $e_i^{(j)} = \delta_i^{(j)} - \delta_i$, where $\delta_i^{(j)} = | Y_i - X_i^T \hat\beta_{(j)} |$.
Estimates and confidence intervals for $\beta_*$ can be obtained by standard least squares procedures based on $\mathcal{D}_2$. The steps are summarized here:
The HARNESS
Input: data $(X_1,Y_1),\ldots,(X_n,Y_n)$.
- Randomly split the data into two halves $\mathcal{D}_1$ and $\mathcal{D}_2$.
- Use $\mathcal{D}_1$ to select a subset of variables $S$. This can be forward stepwise, the lasso, or any other method.
- Let $R$ be the predictive risk of the selected model on a future pair $(X,Y)$, conditional on $\mathcal{D}_1$.
- Using $\mathcal{D}_2$, construct point estimates and confidence intervals for $R$, $(R_j :\ j \in S)$ and $\beta_*$.
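To make the procedure concrete, here is a rough sketch in Python. This is my own illustration, not the authors' implementation: it uses the lasso (scikit-learn's LassoCV) for the selection step, plain normal-approximation intervals on the second half, and hypothetical function and variable names throughout.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LassoCV, LinearRegression

def harness(X, y, alpha=0.05, seed=None):
    """Sketch of a HARNESS-style analysis (illustration only)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    d1, d2 = idx[: n // 2], idx[n // 2:]

    # Stage 1: select variables and coefficients on the first half D1.
    # Any selector (forward stepwise, elastic net, ...) could replace the lasso here.
    fit1 = LassoCV(cv=5).fit(X[d1], y[d1])
    S = np.flatnonzero(fit1.coef_)

    # Stage 2: all inference uses the second half D2, conditional on D1.
    z = norm.ppf(1 - alpha / 2)
    m = len(d2)
    delta = np.abs(y[d2] - (X[d2] @ fit1.coef_ + fit1.intercept_))   # |Y_i - X_i^T beta_hat|
    R_hat = delta.mean()
    half = z * delta.std(ddof=1) / np.sqrt(m)
    R_ci = (R_hat - half, R_hat + half)

    # Risk inflation R_j: no refitting, just zero out coefficient j.
    # (A Bonferroni-corrected z could be used for simultaneous intervals.)
    R_j = {}
    for j in S:
        coef_j = fit1.coef_.copy()
        coef_j[j] = 0.0
        e = np.abs(y[d2] - (X[d2] @ coef_j + fit1.intercept_)) - delta
        hw = z * e.std(ddof=1) / np.sqrt(m)
        R_j[j] = (e.mean(), e.mean() - hw, e.mean() + hw)

    # Projected parameters beta_*: ordinary least squares of y on X_S over D2.
    beta_star = LinearRegression().fit(X[d2][:, S], y[d2]).coef_ if S.size else np.array([])
    return S, R_hat, R_ci, R_j, beta_star
```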
The HARNESS bears some similarity to POSI (Berk et al 2013), which is another inference method for model selection. They both eschew any assumption that the linear model is correct. But POSI attempts to make inferences that are valid over all possible selected models, while the HARNESS restricts attention to the selected model. Also, the HARNESS emphasizes predictive inferential statements.
Here is an example using the wine dataset. (Thanks to the authors for providing the data.) Using the first half of the data, we applied forward stepwise selection to choose a model. The selected variables are Alcohol, Volatile-Acidity, Sulphates, Total-Sulfur-Dioxide and pH. A 95 percent confidence interval for the predictive risk of the null model is (0.65, 0.70). For the selected model, the confidence interval for $R$ is (0.46, 0.53). The (Bonferroni-corrected) 95 percent confidence intervals for the $R_j$'s are shown in the first plot below. The (Bonferroni-corrected) 95 percent confidence intervals for the parameters of the projected model are shown in the second plot below.
5. The Value of Data-Splitting
Some statisticians are uncomfortable with data-splitting. There are two common objections. The first is that the inferences are random: if we repeat the procedure we will get different answers. The second is that it is wasteful.
The first objection can be dealt with by doing many splits and combining the information appropriately. This can be done but is somewhat involved and will be described elsewhere. The second objection is, in my view, incorrect. The value of data splitting is that it leads to simple, assumption free inference. There is nothing wasteful about this. Both halves of the data are being put to use. Admittedly, the splitting leads to a loss of power compared to ordinary methods if the model were correct. But this is a false comparison since we are trying to get inferences without assuming the model is correct. It's a bit like saying that nonparametric function estimators have slower rates of convergence than parametric estimators. But that is only because the parametric estimators invoke stronger assumptions.
6. Conformal Prediction
Since I am focusing my discussion on regression methods that make weak assumptions, I would also like to briefly mention Vladimir Vovk's theory of conformal inference. This is a completely distribution free, finite sample method for predictive regression. The method is described in Vovk, Gammerman and Shafer (2005) and Vovk, Nouretdinov and Gammerman (2009). Unfortunately, most statisticians seem to be unaware of this work, which is a shame. The statistical properties (such as minimax properties) of conformal prediction were investigated in Lei, Robins, and Wasserman (2012) and Lei and Wasserman (2013).
A full explanation of the method is beyond the scope of this discussion but I do want to give the general idea and say why it is related to the current paper. Given data $(X_1,Y_1),\ldots,(X_n,Y_n)$, suppose we observe a new $X$ and want to predict $Y$. Let $y$ be an arbitrary real number. Think of $y$ as a tentative guess at $Y$. Form the augmented data set

$$(X_1,Y_1),\ldots,(X_n,Y_n),\,(X, y).$$
Now we fit a linear model to the augmented data and compute the residuals $e_1,\ldots,e_{n+1}$ for each of the $n+1$ observations. Now we test $H_0: Y = y$. Under $H_0$, the residuals are invariant under permutations and so

$$\pi(y) = \frac{1}{n+1} \sum_{i=1}^{n+1} I\bigl( |e_i| \ge |e_{n+1}| \bigr)$$

is a distribution-free p-value for $H_0$.
Next we invert the test: let $C = \{ y :\ \pi(y) \ge \alpha \}$. It is easy to show that

$$\mathbb{P}(Y \in C) \ge 1 - \alpha.$$

Thus $C$ is a distribution-free, finite-sample prediction interval for $Y$. Like the HARNESS, the validity of the method does not depend on the linear model being correct. The set $C$ has the desired coverage probability no matter what the true model is. Both the HARNESS and conformal prediction use the linear model as a device for generating predictions, but neither requires the linear model to be true for the inferences to be valid. (In fact, in the conformal approach, any method of generating residuals can be used. It does not have to be a linear model. See Lei and Wasserman 2013.)
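Here is a minimal sketch of the full conformal recipe described above. Again, this is my own illustration rather than any official implementation; the grid of trial values, the use of least squares residuals, and all names are assumptions of the sketch.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def conformal_interval(X, y, x_new, alpha=0.1, grid=None):
    """Full conformal prediction for Y at x_new (a sketch of the generic recipe)."""
    if grid is None:                      # crude grid of trial values y0; purely illustrative
        spread = y.max() - y.min()
        grid = np.linspace(y.min() - spread, y.max() + spread, 500)

    kept = []
    for y0 in grid:
        X_aug = np.vstack([X, x_new])     # augment the data with the tentative pair (x_new, y0)
        y_aug = np.append(y, y0)
        resid = np.abs(y_aug - LinearRegression().fit(X_aug, y_aug).predict(X_aug))
        pval = np.mean(resid >= resid[-1])            # rank of the new residual among all n+1
        if pval >= alpha:                 # keep y0 if it is not rejected at level alpha
            kept.append(y0)
    return (min(kept), max(kept)) if kept else None   # report the range of the conformal set

# Example on simulated data; there is no claim that the linear model is correct.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = np.sin(X[:, 0]) + rng.normal(scale=0.5, size=100)
print(conformal_interval(X, y, x_new=rng.normal(size=3)))
```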
One can also look at how the prediction interval changes as different variables are removed. This gives another assumption free method to explore the effects of predictors in regression. Minimizing the length of the interval over the lasso path can also be used as a distribution-free method for choosing the regularization parameter of the lasso.
On a related note, we might also be interested in assumption-free methods for the related task of inferring graphical models. For a modest attempt at this, see Wasserman, Kolar and Rinaldo (2013).
7. Causation
LTTT do not discuss causation. But in any discussion of the assumptions underlying regression, causation is lurking just below the surface. Indeed, there is a tendency to conflate causation and inference. To be clear: prediction, inference and causation are three separate ideas.
Even if the linear model is correct, we have to be careful how we interpret the parameters. Many articles and textbooks describe $\beta_j$ as the change in $Y$ if $x_j$ is changed, holding the other covariates fixed. This is incorrect. In fact, $\beta_j$ is the change in our prediction of $Y$ if $x_j$ is changed. This may seem like nitpicking but this is the very difference between association and causation.
Causation refers to the change in $Y$ as $x_j$ is changed. Association (prediction) refers to the change in our prediction of $Y$ as $x_j$ is changed. Put another way, prediction is about $\mathbb{E}(Y \mid X = x)$ while causation is about $\mathbb{E}(Y \mid \text{set } X = x)$. If $x$ is randomly assigned, they are the same. Otherwise they are different. To make the causal claim, we have to include in the model every possible confounding variable that could affect both $Y$ and $X$. This complete causal model has the form

$$Y = \beta_0 + \sum_j \beta_j X_j + \sum_k \gamma_k Z_k + \epsilon,$$

where $Z_1, Z_2, \ldots$ represent all confounding variables in the world. The relationship between $Y$ and $X$ alone is described as

$$Y = \alpha_0 + \sum_j \alpha_j X_j + \epsilon'.$$

The causal effect, the change in $Y$ as $x_j$ is changed, is given by $\beta_j$. The association (prediction), the change in our prediction of $Y$ as $x_j$ is changed, is given by $\alpha_j$. If there are any omitted confounding variables then these will be different.
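A small simulation (my own illustration, with made-up coefficients) makes the distinction concrete: generate data in which a confounder $Z$ drives both $X$ and $Y$, then compare the coefficient on $X$ when $Z$ is included (the causal $\beta$) with the coefficient when $Z$ is omitted (the associational $\alpha$).

```python
import numpy as np

# Z is a confounder: it affects both X and Y. The causal coefficient of X on Y is 1.0.
rng = np.random.default_rng(0)
n = 100_000
Z = rng.normal(size=n)
X = 0.8 * Z + rng.normal(size=n)
Y = 1.0 * X + 2.0 * Z + rng.normal(size=n)

ones = np.ones(n)
# Full model (X and Z included): recovers the causal coefficient beta, about 1.0.
beta = np.linalg.lstsq(np.column_stack([X, Z, ones]), Y, rcond=None)[0][0]
# Misspecified model (Z omitted): the associational alpha = 1 + 2*Cov(X,Z)/Var(X), about 1.98.
alpha = np.linalg.lstsq(np.column_stack([X, ones]), Y, rcond=None)[0][0]

print(f"beta (Z included): {beta:.2f}   alpha (Z omitted): {alpha:.2f}")
```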
Which brings me back to the paper. Even if the linear model is correct, we still have to exercise great caution in interpreting the coefficients. Most users of our methods are non-statisticians and are likely to interpret them causally no matter how many warnings we give.
8. Conclusion
LTTT have produced a fascinating paper that significantly advances our understanding of high dimensional regression. I expect there will be a flurry of new research inspired by this paper.
My discussion has focused on the role of assumptions. In low dimensional models, it is relatively easy to create methods that make few assumptions. In high dimensional models, low-assumption inference is much more challenging.
I hope I have convinced the authors that the low assumption world is worth exploring. In the meantime, I congratulate the authors on an important and stimulating paper.
Acknowledgements
Thanks to Rob Kass, Rob Tibshirani and Ryan Tibshirani for helpful comments.
References
Berk, Richard and Brown, Lawrence and Buja, Andreas and Zhang, Kai and Zhao, Linda. (2013). Valid post-selection inference. The Annals of Statistics, 41, 802-837.
Greenshtein, Eitan and Ritov, Ya’cov. (2004). Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli, 10, 971-988.
Juditsky, A. and Nemirovski, A. (2000). Functional aggregation for nonparametric regression. Ann. Statist., 28, 681-712.
Leeb, Hannes and Potscher, Benedikt M. (2008). Sparse estimators and the oracle property, or the return of Hodges’ estimator. Journal of Econometrics, 142, 201-211.
Leeb, Hannes and Potscher, Benedikt M. (2005). Model selection and inference: Facts and fiction. Econometric Theory, 21, 21-59.
Lei, Jing, Robins, James and Wasserman, Larry. (2012). Efficient nonparametric conformal prediction regions. Journal of the American Statistical Association.
Lei, Jing and Wasserman, Larry. (2013). Distribution free prediction bands. Journal of the Royal Statistical Society, Series B.
Vovk, V., Gammerman, A., and Shafer, G. (2005). Algorithmic Learning in a Random World. Springer.
Vovk, V., Nouretdinov, I., and Gammerman, A. (2009). On-line predictive linear regression. The Annals of Statistics, 37, 1566-1590.
Wasserman, L., Kolar, M. and Rinaldo, A. (2013). Estimating Undirected Graphs Under Weak Assumptions. arXiv:1309.6933.
Wasserman, Larry and Roeder, Kathryn. (2009). High dimensional variable selection. Annals of Statistics, 37, 2178.