Bootstrapping and Subsampling: Part II
Normal Deviate 2013-03-18
In Part I we discussed the bootstrap and we saw that sometimes it does not work. By “doesn’t work” I mean that the coverage of the nominal $1-\alpha$ confidence intervals does not approach $1-\alpha$ as the sample size increases. In this post I’ll discuss subsampling, which works under broader conditions than the bootstrap.
1. Subsampling
Suppose that $X_1, \ldots, X_n \sim P$. Let $\hat\theta_n = g(X_1, \ldots, X_n)$ be some estimator of $\theta = \theta(P)$.
We need to make one main assumption:
Assume that $\tau_n(\hat\theta_n - \theta)$ converges in distribution to some continuous, non-degenerate distribution $J$.
We do not need to know $J$; we only need to know that such a $J$ exists. We don’t even need to know $\tau_n$, but for simplicity I’ll assume that $\tau_n = \sqrt{n}$. So our assumption is that $\sqrt{n}\,(\hat\theta_n - \theta)$ converges in distribution to some $J$. Here is the subsampling algorithm.
- Choose a number $b = b_n$ such that $b_n \to \infty$ and $b_n/n \to 0$ as $n \to \infty$.
- Draw $b$ observations (without replacement) from the data $X_1, \ldots, X_n$. This is one subsample. Repeat this $N$ times. So we have $N$ subsamples, each of size $b$. Denote these subsamples by $S_1, \ldots, S_N$.
- From each subsample $S_j$ we compute the estimator. This yields $N$ values $\hat\theta_{b,1}, \ldots, \hat\theta_{b,N}$. Note that each estimator is based on one subsample of size $b$.
- Now define
$$L_n(t) = \frac{1}{N}\sum_{j=1}^N I\Bigl(\sqrt{b}\,(\hat\theta_{b,j} - \hat\theta_n) \le t\Bigr),$$
where $I$ denotes the indicator function.
- Let $\hat t_{\alpha/2}$ and $\hat t_{1-\alpha/2}$ be the $\alpha/2$ and $1-\alpha/2$ quantiles of $L_n$:
$$\hat t_{\alpha/2} = L_n^{-1}(\alpha/2), \qquad \hat t_{1-\alpha/2} = L_n^{-1}(1-\alpha/2).$$
- Define the confidence interval
$$C_n = \left[\hat\theta_n - \frac{\hat t_{1-\alpha/2}}{\sqrt{n}},\ \ \hat\theta_n - \frac{\hat t_{\alpha/2}}{\sqrt{n}}\right].$$
Theorem: $\mathbb{P}(\theta \in C_n) \to 1-\alpha$ as $n \to \infty$.
Note: There are $\binom{n}{b}$ possible subsamples of size $b$. In practice it is not necessary to use all of these subsamples; usually one just selects $N$ of them at random.
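To make the algorithm concrete, here is a minimal Python sketch (not from the original post). The estimator, the data, the choice $b = \lceil \sqrt{n} \rceil$ and the default of $N = 1{,}000$ random subsamples are all illustrative assumptions, and the function name `subsampling_ci` is hypothetical.

```python
import numpy as np

def subsampling_ci(x, estimator, b, N=1000, alpha=0.05, rng=None):
    """Subsampling confidence interval for a root-n consistent estimator.

    x         : 1-d array of data
    estimator : function mapping a sample to a scalar estimate
    b         : subsample size (need b -> infinity and b/n -> 0)
    N         : number of random subsamples
    alpha     : 1 - alpha is the nominal coverage
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    n = len(x)
    theta_hat = estimator(x)

    # roots[j] = sqrt(b) * (theta_hat_{b,j} - theta_hat_n); their empirical CDF is L_n
    roots = np.empty(N)
    for j in range(N):
        sub = rng.choice(x, size=b, replace=False)   # subsample drawn WITHOUT replacement
        roots[j] = np.sqrt(b) * (estimator(sub) - theta_hat)

    # t_lo and t_hi are the alpha/2 and 1 - alpha/2 quantiles of L_n
    t_lo, t_hi = np.quantile(roots, [alpha / 2, 1 - alpha / 2])

    # C_n = [theta_hat - t_hi / sqrt(n), theta_hat - t_lo / sqrt(n)]
    return theta_hat - t_hi / np.sqrt(n), theta_hat - t_lo / np.sqrt(n)

# Example: a 95% interval for the mean of exponential data (true mean = 1).
rng = np.random.default_rng(0)
x = rng.exponential(size=2000)
b = int(np.ceil(np.sqrt(len(x))))      # one valid choice: b -> infinity, b/n -> 0
print(subsampling_ci(x, np.mean, b, rng=rng))
```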
2. Why It Works
Here is an outline of the proof of the Theorem. (See Chapter 2 of Politis, Romano and Wolf 1999 for details.) Define
$$J_n(x) = \mathbb{P}\bigl(\sqrt{n}\,(\hat\theta_n - \theta) \le x\bigr).$$
By assumption, $J_n \rightsquigarrow J$. Also, $J_b \rightsquigarrow J$ since $b \to \infty$ as $n \to \infty$. The key fact is that $L_n(x) - J_b(x) \stackrel{P}{\to} 0$ as $n \to \infty$. To see this, note that
$$\sqrt{b}\,(\hat\theta_b - \hat\theta_n) = \sqrt{b}\,(\hat\theta_b - \theta) + \sqrt{b}\,(\theta - \hat\theta_n) = \sqrt{b}\,(\hat\theta_b - \theta) + o_P(1),$$
where $\sqrt{b}\,(\hat\theta_n - \theta) = \sqrt{b/n}\;\sqrt{n}\,(\hat\theta_n - \theta) = o_P(1)$. This follows since $b/n \to 0$ and since $\sqrt{n}\,(\hat\theta_n - \theta)$ is converging in distribution. So,
$$L_n(x) \approx U(x),$$
where
$$U(x) = \frac{1}{N}\sum_{j=1}^N I\bigl(\sqrt{b}\,(\hat\theta_{b,j} - \theta) \le x\bigr).$$
By Hoeffding’s inequality for U-statistics, we have that, for every $\epsilon > 0$,
$$\mathbb{P}\bigl(|U(x) - J_b(x)| > \epsilon\bigr) \le 2\exp\bigl(-2\lfloor n/b \rfloor\,\epsilon^2\bigr),$$
which goes to 0 since $n/b \to \infty$ as $n \to \infty$. So $L_n(x) - J_b(x) \stackrel{P}{\to} 0$.
The coverage of $C_n$ is
$$\mathbb{P}(\theta \in C_n) = \mathbb{P}\left(\hat\theta_n - \frac{\hat t_{1-\alpha/2}}{\sqrt{n}} \le \theta \le \hat\theta_n - \frac{\hat t_{\alpha/2}}{\sqrt{n}}\right) = \mathbb{P}\bigl(\hat t_{\alpha/2} \le \sqrt{n}\,(\hat\theta_n - \theta) \le \hat t_{1-\alpha/2}\bigr).$$
But $L_n$ is close to $J_b$, which is close to $J_n$, so $\hat t_{\alpha/2}$ and $\hat t_{1-\alpha/2}$ are approximately the $\alpha/2$ and $1-\alpha/2$ quantiles of $J_n$. Hence the coverage is approximately
$$J_n(\hat t_{1-\alpha/2}) - J_n(\hat t_{\alpha/2}) \to 1 - \alpha.$$
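As a quick numerical sanity check on the theorem (not part of the original argument), one can estimate the coverage of $C_n$ by simulation. A minimal sketch, reusing the hypothetical `subsampling_ci` function sketched above, with the mean of exponential data (so $\theta = 1$) as an assumed example:

```python
# Monte Carlo estimate of the coverage of C_n; it should be close to 1 - alpha = 0.95.
rng = np.random.default_rng(1)
n, b, reps, hits = 2000, 45, 500, 0
for _ in range(reps):
    x = rng.exponential(size=n)                        # true theta = E[X] = 1
    lo, hi = subsampling_ci(x, np.mean, b, N=500, rng=rng)
    hits += (lo <= 1.0 <= hi)
print(hits / reps)
```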
The amazing thing about subsampling is that it requires very few regularity conditions; it works under almost no conditions. This is in contrast to the bootstrap, which does require fairly strong conditions to yield valid confidence intervals.
3. What’s the Catch?
There are a few catches. First, you need to choose $b$. The only requirement is that $b \to \infty$ and $b/n \to 0$ as $n \to \infty$. It’s great that it works for such a large range of values of $b$, but we need a way to pick $b$.
Fortunately, there are methods for choosing $b$. I won’t go into detail; rather, I’ll point you to Chapter 9 of Politis, Romano, and Wolf (1999) and also Bickel and Sakov (2008).
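Just to convey the flavor of these rules, here is my own rough sketch, not a faithful implementation of either reference: compute the subsampling distribution for a grid of candidate values of $b$ and pick the $b$ at which the estimated distribution is most stable, e.g. where successive estimates are closest in a Kolmogorov-type distance. The function name `choose_b` and all settings are illustrative assumptions.

```python
import numpy as np

def choose_b(x, estimator, candidates, N=500, rng=None):
    """Pick b where the estimated subsampling distribution is most stable.

    A rough stability heuristic in the spirit of Bickel & Sakov (2008) and the
    'minimum volatility' idea; not a faithful implementation of either.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    theta_hat = estimator(x)

    # estimated subsampling distribution (stored as sorted roots) for each candidate b
    dists = []
    for b in candidates:
        roots = np.array([
            np.sqrt(b) * (estimator(rng.choice(x, size=b, replace=False)) - theta_hat)
            for _ in range(N)
        ])
        dists.append(np.sort(roots))

    # compare successive distributions on a common grid of quantile levels
    levels = np.linspace(0.01, 0.99, 99)
    diffs = [
        np.max(np.abs(np.quantile(d1, levels) - np.quantile(d2, levels)))
        for d1, d2 in zip(dists[:-1], dists[1:])
    ]
    return candidates[int(np.argmin(diffs))]   # b whose neighbor gives the most similar L_n

# Example: choose b from a grid of candidates for exponential data.
rng = np.random.default_rng(2)
x = rng.exponential(size=2000)
print(choose_b(x, np.mean, [20, 30, 45, 70, 100, 150], rng=rng))
```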
A more serious problem is that the confidence guarantee is only an asymptotic guarantee. However, this is true of many statistical methods. I prefer to use methods with finite sample guarantees whenever possible but sometimes there is no choice but to resort to large sample approximations. Come to think of it, a good topic for a future blog post would be: for which problems are there accurate methods with finite sample guarantees?
Another concern about subsampling and bootstrapping is that they are computationally intensive. For a very interesting approach to dealing with the computational burden, see this recent paper by Ariel Kleiner, Ameet Talwalkar, Purnamrita Sarkar and Mike Jordan.
By the way, see this paper by Joe Romano and Azeem Shaikh for some recent theoretical advances.
4. Conclusion
The bootstrap and subsampling are powerful techniques for constructing confidence intervals. Subsampling works under weaker conditions and relies on simpler theory. However, subsampling requires choosing a tuning parameter $b$ (the size of the subsamples), which probably explains why it is less popular. I think that if we had a fast method for automatically choosing $b$, then subsampling would replace bootstrapping as the method of choice.
References
Bickel, P.J. and Sakov, A. (2008). On the choice of m in the m out of n bootstrap and confidence bounds for extrema. Statistica Sinica, 18, 967-985.
Politis, D.N., Romano, J.P. and Wolf, M. (1999). Subsampling. Springer-Verlag.