Robustness in train and test time
Windows On Theory 2021-04-01
Scribe notes by Praneeth Vepakomma
Previous post: Unsupervised learning and generative models Next post: TBD. See also all seminar posts and course webpage.
lecture slides (pdf) – lecture slides (Powerpoint with animation and annotation) – video
In this blog post, we will focus on the topic of robustness – how well (or not so well) do machine learning algorithms perform when either their training or testing/deployment data differs from our expectations.
We will cover the following areas:
- Math/stat refresher
- Multiplicative weights algorithm
- Robustness
  - train-time
    - robust statistics
    - robust mean estimation
    - data poisoning
  - test-time
    - distribution shifts
    - adversarial perturbations
Math/stat refresher
KL refresher
The KL-divergence between probability distributions $p$ and $q$ is the expectation of the log of probability ratios:
$$\mathrm{KL}(p\|q) = \mathbb{E}_{x \sim p}\left[\log \frac{p(x)}{q(x)}\right].$$
The KL-divergence is always non-negative and can be decomposed in terms of the (negative) entropy and the cross-entropy:
$$\mathrm{KL}(p\|q) = \underbrace{\sum_x p(x)\log p(x)}_{-H(p)} \;+\; \underbrace{\Big(-\sum_x p(x)\log q(x)\Big)}_{H(p,q)} \;=\; H(p,q) - H(p).$$
The non-negativity property thereby implies that for all distributions $p, q$, the cross-entropy is at least the entropy: $H(p,q) \geq H(p)$, with equality iff $p = q$.
Introduction to concentration
Consider a Gaussian random variable $X \sim N(\mu, \sigma^2)$. Concentration is the phenomenon that if you have i.i.d. random variables $X_1,\ldots,X_n$ with expectation $\mu$ (and variance $\sigma^2$), then the empirical average $\frac{1}{n}\sum_{i=1}^n X_i$ is distributed approximately like $N(\mu, \sigma^2/n)$. Therefore the standard deviation of the empirical average is a factor $\sqrt{n}$ smaller than the standard deviation of each of the $X_i$'s. The central limit theorem ensures this asymptotically w.r.t. $n$, while non-asymptotic versions of this phenomenon can be seen via popular inequalities such as the Chernoff/Hoeffding/Bernstein style inequalities. These roughly have the form
$$\Pr\left[\Big|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\Big| \geq \epsilon\right] \leq 2\exp(-c\, n\epsilon^2).$$
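As a quick numerical illustration of this $1/\sqrt{n}$ shrinkage (a minimal sketch; the Gaussian distribution and sample sizes below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, trials = 0.0, 1.0, 5_000

for n in (10, 100, 1000):
    # Empirical average of n i.i.d. N(mu, sigma^2) samples, repeated many times.
    means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
    print(f"n={n:5d}  std of empirical mean = {means.std():.4f}"
          f"  (sigma/sqrt(n) = {sigma / np.sqrt(n):.4f})")
```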
Matrix notations
Norms
We denote the spectral norm of a matrix $A$ by $\|A\|$ and the Frobenius norm by $\|A\|_F$.
PSD Ordering
We use the matrix ordering $A \succeq B$ if $v^\top A v \geq v^\top B v$ for all vectors $v$. We similarly write $A \preceq B$ if $B \succeq A$.
Matrix/vector valued random variables
Vector-valued normals: If $\mu \in \mathbb{R}^d$ is a vector and $V \in \mathbb{R}^{d\times d}$ is a psd covariance matrix, we write $X \sim N(\mu, V)$ for the normal distribution over $\mathbb{R}^d$ with $\mathbb{E}[X] = \mu$ and $\mathbb{E}[(X-\mu)(X-\mu)^\top] = V$.
For every psd matrix $V$, there exists a normal distribution $N(\mu, V)$ where the covariance is exactly $V$.
Standard vector-valued normal: This refers to the case when $\mu = 0$ and $V = I$ (or, with a different normalization, $V = \tfrac{1}{d} I$).
Matrix concentration
For scalar random variables, we saw that if $X_1,\ldots,X_n$ are i.i.d., bounded (say $|X_i|\leq 1$), with expectation $\mu$, then
$$\Pr\left[\Big|\frac{1}{n}\sum_{i} X_i - \mu\Big| \geq \epsilon\right] \leq 2\exp(-c\, n\epsilon^2).$$
If the random variables are $d$-dimensional vectors instead, we can easily generalize this by applying the scalar bound coordinate-wise. Generalizing to matrices is far more interesting, especially when the norm under consideration is the spectral norm instead of the Frobenius norm.
Matrix Bernstein inequality:
If $X_1,\ldots,X_n$ are i.i.d. symmetric matrices in $\mathbb{R}^{d\times d}$ with $\mathbb{E}[X_i] = \mu$ and $\|X_i\| \leq 1$ (i.e., bounded in spectral norm), then
$$\Pr\left[\Big\|\frac{1}{n}\sum_{i} X_i - \mu\Big\| \geq \epsilon\right] \leq 2d\cdot\exp(-c\, n\epsilon^2).$$
Note that $\mu = \mathbb{E}[X_i]$ is a matrix in this case. The norm under consideration is the spectral norm. Note that the difference in the inequality w.r.t. the scalar case is the additional multiplicative factor of $d$ (equivalently, an additive factor of $\log d$ in the exponent), as in the equation above. Please refer to Tropp, 2015 Chapter 6 for formally precise statements.
Another property that follows is that the expected norm of the difference satisfies
$$\mathbb{E}\left\|\frac{1}{n}\sum_i X_i - \mu\right\| \lesssim \sqrt{\frac{\log d}{n}}.$$
There are some long-standing conjectures and results on cases when one can get rid of this additional factor of $\log d$.
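A tiny simulation (with arbitrary parameters) of where this extra logarithmic factor comes from, using random diagonal sign matrices, whose spectral norm is just the largest coordinate average in absolute value:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 1000, 2000

# X_i are random diagonal matrices with i.i.d. +/-1 diagonal entries:
# ||X_i|| = 1 and E[X_i] = 0. The spectral norm of the average equals the
# maximum |coordinate average|, which picks up an extra sqrt(log d) factor
# relative to a single scalar average.
signs = rng.choice([-1.0, 1.0], size=(n, d))
avg_diag = signs.mean(axis=0)
print("spectral norm of average :", np.abs(avg_diag).max())
print("scalar rate 1/sqrt(n)    :", 1 / np.sqrt(n))
print("matrix rate sqrt(log d/n):", np.sqrt(np.log(d) / n))
```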
Trivia on log factors: As a tangent (and some trivia!), results of this flavor, where a logarithmic factor was replaced by a constant, include Spencer's paper from 1985 titled 'Six standard deviations suffice', which showed that a constant of 6 suffices in certain discrepancy bounds where a naive argument would instead give a $\sqrt{\log n}$ dependency. The Kadison-Singer problem (resolved by Marcus, Spielman, and Srivastava) and the Paulsen problem are two other examples of works in this flavor.
Random matrices
For a random symmetric $d\times d$ matrix $M$ with i.i.d. entries of mean $0$ and variance $1/d$ (e.g., symmetrized Gaussian entries), the spectrum of eigenvalues is distributed according to the Wigner semi-circle law on a support between $-2$ and $+2$, as shown in the figure below. Note that most mass is observed close to $0$.
Marchenko-Pastur distribution (for random empirical covariance matrices)
For $x_1,\ldots,x_n \sim N(0, I_d)$, let $\hat{\Sigma} = \frac{1}{n}\sum_{i=1}^n x_i x_i^\top$ be the empirical estimate for the covariance. The eigenvalues of $\hat{\Sigma}$ are distributed according to the Marchenko-Pastur distribution, whose shape depends on the ratio $d/n$. In the case when $n \gg d$, the eigenvalues will be bounded away from zero. When $n \approx d$, the distribution has a lot more mass close to $0$ (way more than the semi-circle law). When $n < d$, there is a spike of eigenvalues at $0$ and the rest are bounded away from $0$.
**Connections to the stability of linear regression:** In the regime when $n \approx d$, linear regression is most unstable, which is the case when most of the eigenvalues of the empirical covariance are close to $0$. When $n \gg d$, the eigenvalues are bounded away from zero and the solution is stable. When $n < d$, linear regression is over-parametrized and the condition number of $\hat{\Sigma}$ is infinite, and therefore we use a pseudo-inverse as opposed to the usual inverse in computing the solution for linear regression. This ignores the subspace on which the matrix has zero eigenvalues while the inverse is performed. The condition number restricted to the subspace of the non-zero eigenvectors is finite, so the (approximate) solution is still pretty stable, as in the case of $n \gg d$.
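A short simulation (illustrative parameters only) of the three regimes of the empirical covariance spectrum discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200

for n in (2000, 200, 100):               # n >> d,  n ~ d,  n < d
    X = rng.normal(size=(n, d))
    cov_hat = X.T @ X / n                # empirical covariance of N(0, I_d) samples
    eigs = np.linalg.eigvalsh(cov_hat)
    print(f"n={n:5d}  min eig = {eigs.min():.3f}  max eig = {eigs.max():.3f}  "
          f"# (near-)zero eigs = {(eigs < 1e-10).sum()}")
```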
Digression: Multiplicative Weights Algorithm
Note: Its variants/connections include techniques and topics such as Follow The Regularized Leader, regret minimization, and mirror descent. Elad Hazan's lecture notes on online convex optimization and Arora, Hazan, and Kale's article on multiplicative and matrix multiplicative weights are great reading sources for these topics.
The following is a general setup in online optimization and online learning.
Setup: There are $n$ possible actions $\{1,\ldots,n\}$.
At time $t$, we choose an action $i_t$ and incur the loss $\ell^t_{i_t}$ for that action (we assume losses lie in $[0,1]$). After this action, we also learn the losses $\ell^t_j$ for the actions $j$ we did not take. (This is referred to as the experts model in online learning, in contrast to the bandit model where we only learn the loss for the action taken; the experts model is an easier setup than the bandit model.)
The following is the overall approach for learning to optimize in this online setup.
Overall Approach:
- Initialize a distribution $p^1$ over the action space $\{1,\ldots,n\}$. We then take a step (choose an action) based on this distribution and observe the incurred losses.
- Then the distribution is updated to $p^{t+1}$ by letting $p^{t+1}_i \propto p^t_i \cdot e^{-\eta \ell^t_i}$. I.e., if an action gave a bad outcome in terms of loss, we downweight its probability for the next draw of the action to be taken. $\eta$ is the penalty term that governs the aggressiveness of the downweighting. (A minimal code sketch of this update appears right after this list.)
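Here is a minimal sketch of this update in Python (the losses and the choice of learning rate below are placeholder choices, not from the lecture):

```python
import numpy as np

def multiplicative_weights(losses, eta):
    """Multiplicative-weights (exponential weights) update.

    losses: array of shape (T, n) -- loss of each of n actions at each of T steps,
            assumed to lie in [0, 1].
    eta:    learning rate / penalty parameter.
    Returns the sequence of distributions and the total expected loss incurred.
    """
    T, n = losses.shape
    w = np.ones(n)                      # unnormalized weights, uniform init
    total_loss, dists = 0.0, []
    for t in range(T):
        p = w / w.sum()                 # current distribution p^t
        dists.append(p)
        total_loss += p @ losses[t]     # expected loss incurred at step t
        w *= np.exp(-eta * losses[t])   # downweight actions with large loss
    return np.array(dists), total_loss

# Tiny usage example with random losses (hypothetical data).
rng = np.random.default_rng(0)
losses = rng.uniform(size=(500, 10))
T, n = losses.shape
_, alg_loss = multiplicative_weights(losses, eta=np.sqrt(np.log(n) / T))
best_fixed = losses.sum(axis=0).min()   # best single action in hindsight
print("regret per step:", (alg_loss - best_fixed) / T)
```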
The hope is that this approach converges to a “good aggregation or strategy”. We measure the quality of the strategy using regret.
The difference between the (average) cost we incur and that of the optimal action (or probability distribution over actions) in hindsight is known as the (average) regret:
$$\text{Regret} = \frac{1}{T}\sum_{t=1}^{T}\langle p^t, \ell^t\rangle \;-\; \min_{p^*}\frac{1}{T}\sum_{t=1}^{T}\langle p^*, \ell^t\rangle.$$
Note that we compare the loss we incur with the best loss over a fixed (i.e., nonadaptive) probability distribution over the set of possible actions. It can be shown that the best such loss can always be achieved by a distribution that puts all the weight on a single action.
The following theorem bounds the regret: for every distribution $p^*$ over actions,
$$\frac{1}{T}\sum_{t=1}^{T}\langle p^t, \ell^t\rangle \;\leq\; \frac{1}{T}\sum_{t=1}^{T}\langle p^*, \ell^t\rangle \;+\; \underbrace{\frac{\mathrm{KL}(p^*\|p^1)}{\eta T}}_{\text{prior ignorance}} \;+\; \underbrace{\eta}_{\text{sensitivity per step}} \;\leq\; \frac{1}{T}\sum_{t=1}^{T}\langle p^*, \ell^t\rangle + 2\sqrt{\frac{\ln n}{T}}.$$
The 'prior ignorance' term captures how good the initialization $p^1$ is. The 'sensitivity per step' term $\eta$ captures how aggressive the updates are (i.e., this term governs the potential difference between the loss incurred at step $t$ versus step $t+1$). As $\eta$ also occurs in the denominator of the prior ignorance term, it needs to be carefully chosen to balance the two terms properly.
The first inequality of this theorem is always true for any $p^*$, whether $p^*$ is the optimal strategy or not. The second (right-most) inequality is true when the optimal $p^*$ is a delta function (which it will be for an optimal strategy), $p^1$ is initialized to the uniform distribution, and $\eta$ is set to be $\sqrt{\ln n / T}$. Note that in this case, the divergence $\mathrm{KL}(p^*\|p^1) \leq \ln n$.
To state the 'overall approach' more precisely, there needs to be a normalization at every step, as below:
$$p^{t+1}_i = \frac{p^t_i\, e^{-\eta \ell^t_i}}{Z_t}, \qquad Z_t = \sum_j p^t_j\, e^{-\eta \ell^t_j}.$$
The above can be rearranged as follows upon taking logs:
$$\eta\,\ell^t_i = \log p^t_i - \log p^{t+1}_i - \log Z_t.$$
By substituting this into the definition of the regret below,
$$T\cdot\text{Regret} = \sum_{t=1}^{T}\langle p^t, \ell^t\rangle - \sum_{t=1}^{T}\langle p^*, \ell^t\rangle,$$
we get the following expanded version of what the regret amounts to:
$$T\cdot\text{Regret} = \sum_{t=1}^{T}\langle p^t, \ell^t\rangle - \frac{1}{\eta}\sum_{t=1}^{T}\sum_i p^*_i\big(\log p^t_i - \log p^{t+1}_i\big) + \frac{1}{\eta}\sum_{t=1}^{T}\log Z_t$$
$$= \sum_{t=1}^{T}\langle p^t, \ell^t\rangle + \frac{1}{\eta}\Big(H(p^*, p^1) - H(p^*, p^{T+1})\Big) + \frac{1}{\eta}\sum_{t=1}^{T}\log Z_t,$$
where $H(p,q) = -\sum_i p_i \log q_i$ denotes the cross-entropy.
The last equation above is due to the telescoping sum where adjacent terms cancel out except the first and last terms.
Since the cross-entropy is always at least as large as the entropy, we also have $H(p^*, p^{T+1}) \geq H(p^*)$, and hence $H(p^*, p^1) - H(p^*, p^{T+1}) \leq H(p^*, p^1) - H(p^*) = \mathrm{KL}(p^*\|p^1)$.
So now we have, so far:
$$\text{Regret} \;\leq\; \frac{\mathrm{KL}(p^*\|p^1)}{\eta T} \;+\; \frac{1}{T}\sum_{t=1}^{T}\left(\langle p^t, \ell^t\rangle + \frac{1}{\eta}\log Z_t\right).$$
Now there is the property that if $0 < \eta \leq 1$ and $\ell^t_i \in [0,1]$ for all $i$, then $\log Z_t \leq -\eta\langle p^t, \ell^t\rangle + \eta^2$. Therefore, upon substituting this, each term in the sum above is at most $\eta$, and we prove the upper bound stated on the regret.
This subclaim can be proved as follows: using $e^{-x} \leq 1 - x + x^2$ for $x \in [0,1]$,
$$Z_t = \sum_i p^t_i\, e^{-\eta \ell^t_i} \;\leq\; \sum_i p^t_i\big(1 - \eta \ell^t_i + \eta^2 (\ell^t_i)^2\big) \;\leq\; 1 - \eta\langle p^t, \ell^t\rangle + \eta^2,$$
and then $\log Z_t \leq -\eta\langle p^t, \ell^t\rangle + \eta^2$ follows from $\log(1+u) \leq u$.
Generalization of multiplicative weights: Follow The Regularized Leader (FTRL)
When the set $K$ of actions is convex (for example, the set of probability distributions over $n$ discrete actions is convex), the setup generalizes as follows.
At time $t$, the algorithm makes a choice $x_t \in K$ and then learns the cost function $\ell_t$ (so the setup is again like the experts model, unlike the bandit model).
Now the new action in FTRL is based on the following optimization, which minimizes the cumulative loss regularized with a regularizer $R$:
$$\text{FTRL:}\qquad x_{t+1} = \arg\min_{x \in K}\ \sum_{s=1}^{t}\ell_s(x) + \frac{1}{\eta}R(x).$$
In this case, the regret bound is given by the following theorem (stated informally):
$$\frac{1}{T}\sum_{t=1}^{T}\ell_t(x_t) \;\leq\; \frac{1}{T}\sum_{t=1}^{T}\ell_t(x^*) \;+\; \underbrace{\frac{R(x^*) - \min_{x\in K} R(x)}{\eta T}}_{\text{prior ignorance}} \;+\; \underbrace{O(\eta)}_{\text{sensitivity per step}},$$
where $x^*$ refers to the optimal choice in hindsight. This indeed has a similar flavor to the theorem we saw for multiplicative weights. To be precise, when $K$ is the probability simplex and $R(p) = \sum_i p_i \log p_i$ is the negative entropy, multiplicative weights becomes an instance of FTRL. Similarly, there is a view that connects multiplicative weights to mirror descent. (Nisheeth Vishnoi has notes on this.)
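As a sanity check of this connection, here is a short derivation (under the standard assumption of linear losses $\ell_s(p) = \langle p, \ell^s\rangle$, which the notes above do not spell out) showing that FTRL with the negative-entropy regularizer recovers the multiplicative-weights distribution:
$$p^{t+1} = \arg\min_{p \in \Delta_n}\ \sum_{s=1}^{t}\langle p, \ell^s\rangle + \frac{1}{\eta}\sum_i p_i \log p_i.$$
Setting the derivative of the Lagrangian (with multiplier $\lambda$ for the constraint $\sum_i p_i = 1$) with respect to $p_i$ to zero gives $\sum_{s=1}^{t}\ell^s_i + \frac{1}{\eta}(\log p_i + 1) + \lambda = 0$, i.e.
$$p^{t+1}_i \;\propto\; \exp\Big(-\eta\sum_{s=1}^{t}\ell^s_i\Big),$$
which is exactly the multiplicative-weights distribution after $t$ steps.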
Train-Time Robustness
We now look at some robustness issues specific to the training phase in machine learning.
For example, during training, there could be adversarially poisoned examples in the training dataset that damage the trained model’s performance.
Setup: Suppose $(1-\epsilon)n$ samples are generated from a genuine data distribution and $\epsilon n$ samples are maliciously chosen from an arbitrary distribution. I.e., in formal notation: $x_1,\ldots,x_{(1-\epsilon)n} \sim p$ i.i.d., while $x_{(1-\epsilon)n+1},\ldots,x_n$ are arbitrary. (While for convenience we assume here that the last $\epsilon n$ items are maliciously chosen, the learning algorithm does not get the data in order.)
Assume $p = N(\mu, 1)$ over $\mathbb{R}$ for now.
Let us start with the task of estimating the mean under this setup.
Mean estimation: Estimate $\mu = \mathbb{E}_{x\sim p}[x]$.
Noiseless case: In the noiseless case ($\epsilon = 0$), the best estimate for the mean under many measures is the empirical mean $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i$.
Standard concentration results show that for $p = N(\mu, 1)$, with high probability $|\hat{\mu} - \mu| \leq O(1/\sqrt{n})$, and for a general $p$ with variance at most $1$, $\mathbb{E}\,|\hat{\mu} - \mu| \leq 1/\sqrt{n}$.
Adversarial case: If there is even a single malicious sample (which can be arbitrarily large or small), then it is well known that the empirical mean can give an arbitrarily bad approximation of the true population mean. For $\epsilon < 1/2$ we can instead use the empirical median $\hat{\mu} = \mathrm{median}(x_1,\ldots,x_n)$. This is guaranteed to lie between the $(\tfrac12-\epsilon)$ and $(\tfrac12+\epsilon)$ quantiles of the real data; for $p = N(\mu, 1)$ these quantiles are within $O(\epsilon)$ of $\mu$, so $|\hat{\mu} - \mu| \leq O(\epsilon + 1/\sqrt{n})$. This is essentially optimal because of the property that no estimator can achieve error better than $\Omega(\epsilon)$ in the presence of an $\epsilon$ fraction of corruptions.
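A quick numerical illustration in one dimension (a sketch; the contamination pattern below is just one crude adversary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, mu = 10_000, 0.05, 1.0

# (1 - eps) * n genuine samples from N(mu, 1), eps * n adversarial outliers.
good = rng.normal(mu, 1.0, size=int((1 - eps) * n))
bad = np.full(n - len(good), 1e6)           # a crude adversary: huge outliers
x = np.concatenate([good, bad])

print("empirical mean  :", x.mean())        # dragged arbitrarily far by outliers
print("empirical median:", np.median(x))    # stays within O(eps) of mu
```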
In higher dimensions, the story instead goes as follows:
Setup: $x_1,\ldots,x_{(1-\epsilon)n} \sim N(\mu, I_d)$ over $\mathbb{R}^d$, while $x_{(1-\epsilon)n+1},\ldots,x_n$ are arbitrary.
Median of coordinates (a starter solution): When $d > 1$, we can take the median per coordinate as an initial naive solution, i.e., $\hat{\mu}_j = \mathrm{median}(x_{1,j},\ldots,x_{n,j})$ (a median of coordinates). In this case, say if we have $\mu = 0$, then an adversary can perturb the data to make an $\epsilon$ fraction of the points be $(C, C, \ldots, C)$ where $C$ is some large number. This will shift, in each coordinate, the distribution to have median roughly $\epsilon$ instead of median $0$, and hence make $\hat{\mu}_j \geq c\,\epsilon$ in every coordinate, which implies that $\|\hat{\mu} - \mu\|_2 \geq c\,\epsilon\sqrt{d}$ for some constant $c$.
The obvious next question is: Can we do better? i.e, can we avoid paying this dimension-dependent price?
Yes we can! We can use the 'Tukey median'!
What is a Tukey median?
Informally, via the picture below: if you have a Tukey median (red), no matter which direction you take from it and look at the corresponding half-space, it partitions the data into roughly half the number of points on each side.
Formal definition of Tukey median is given as follows (fixing some parameters):
A Tukey median* of $x_1,\ldots,x_n \in \mathbb{R}^d$ is a vector $\hat{\mu}$ s.t. for every nonzero $v \in \mathbb{R}^d$,
$$\frac{1}{n}\Big|\big\{\, i \;:\; \langle x_i - \hat{\mu},\, v\rangle \geq 0 \,\big\}\Big| \;\in\; \Big[\tfrac12 - 2\epsilon,\; \tfrac12 + 2\epsilon\Big].$$
A Tukey median need not always exist for a given dataset. However, we will show that if the data is generated at random according to a nice distribution, and then at most an $\epsilon$ fraction of it is perturbed, a Tukey median will exist with high probability. In fact, the true population mean will be such a median.
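To make the definition concrete, here is a heuristic Python sketch that tests the Tukey-median condition along randomly sampled directions (note this is only a heuristic check, since the definition quantifies over all directions):

```python
import numpy as np

def approx_tukey_check(points, mu_hat, eps, n_dirs=2000, seed=0):
    """Heuristically test the Tukey-median condition for mu_hat by sampling
    random directions v and checking that the fraction of points with
    <x_i - mu_hat, v> >= 0 stays within [1/2 - 2*eps, 1/2 + 2*eps]."""
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    for _ in range(n_dirs):
        v = rng.normal(size=d)
        frac = np.mean((points - mu_hat) @ v >= 0)
        if not (0.5 - 2 * eps <= frac <= 0.5 + 2 * eps):
            return False
    return True

# Example: the true mean of clean Gaussian data passes the (sampled) check.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20))
print(approx_tukey_check(X, np.zeros(20), eps=0.05))
```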
Existence of Tukey median
THM: If $x_1,\ldots,x_{(1-\epsilon)n} \sim N(\mu, I_d)$, the remaining $\epsilon n$ points are arbitrary, and $n \gg d/\epsilon^2$, then
- a Tukey median exists, and
- for every Tukey median $\hat{\mu}$, $\|\hat{\mu} - \mu\| \leq O(\epsilon)$.
Together, 1. and 2. mean that if we search over all vectors and output the first Tukey median that we find, then (a) this process will terminate and (b) its output will be a good approximation of the population mean. In particular, we do not have to pay the extra $\sqrt{d}$ cost incurred by the median of coordinates!
**Proof of 1 (existence):** The mean $\mu$ itself is a Tukey median in this case because, for every unit vector $v$, if we define $Y_i = \mathrm{sign}(\langle x_i - \mu, v\rangle)$ for the unperturbed $x_i$'s, then these are i.i.d. $\pm 1$ variables of mean zero, and thus
$$\Pr\left[\Big|\frac{1}{n}\sum_i Y_i\Big| \geq \epsilon\right] \leq 2\exp(-c\,\epsilon^2 n)$$
for some constant $c$. Through discretization, we can pretend (losing some constants) that there are only $2^{O(d)}$ unit vectors in $\mathbb{R}^d$. Hence (using $n \gg d/\epsilon^2$) we can use the union bound to conclude that for every unit $v$, if we restrict attention to the "good" (i.e., unperturbed) vectors $x_i$, then the fraction of them satisfying $\langle x_i - \mu, v\rangle \geq 0$ will be in $[\tfrac12 - \epsilon, \tfrac12 + \epsilon]$. Since the adversary can perturb at most $\epsilon n$ vectors, the overall fraction of $i$'s such that $\langle x_i - \mu, v\rangle \geq 0$ will be in $[\tfrac12 - 2\epsilon, \tfrac12 + 2\epsilon]$. QED(1)
**Proof of 2:** Let $\mu$ be the population mean (i.e., the "good" $x_i$'s are distributed according to $N(\mu, I_d)$). Suppose for simplicity that $\mu = 0$, and, toward a contradiction, that $\hat{\mu}$ is a Tukey median with $\|\hat{\mu}\| \geq C\epsilon$ for a large constant $C$.
Let $v = \hat{\mu}/\|\hat{\mu}\|$. Then $\langle \hat{\mu}, v\rangle = \|\hat{\mu}\| \geq C\epsilon$.
Note that $\langle x_i, v\rangle$ is distributed as $N(0,1)$ for the good $x_i$'s, and so we get that $\langle x_i - \hat{\mu}, v\rangle$ is distributed as $N(-\|\hat{\mu}\|, 1)$. Hence, if $\|\hat{\mu}\| \geq C\epsilon$ then $\Pr[\langle x_i - \hat{\mu}, v\rangle \geq 0] \leq \tfrac12 - \Omega(C\epsilon)$. This implies, via a similar concentration argument as above, that with high probability the fraction of $i$'s such that $\langle x_i - \hat{\mu}, v\rangle \geq 0$ is below $\tfrac12 - 2\epsilon$ (for $C$ large enough, even after accounting for the $\epsilon n$ perturbed points), contradicting our assumption that $\hat{\mu}$ was a Tukey median. QED(2)
Exactly computing the Tukey median is NP-hard, but efficient algorithms for robust mean estimation of normals and other distributions exist as referred to in Jerry Li’s lecture notes. In particular, we can use the following approach using spectral signatures and filtering.
Spectral signatures can efficiently certify that a given vector is in fact a robust mean estimator. Let $x_1,\ldots,x_{(1-\epsilon)n} \sim N(\mu, I_d)$ and let $x_{(1-\epsilon)n+1},\ldots,x_n$ be arbitrary. Let $\hat{\mu} = \frac{1}{n}\sum_i x_i$ be the empirical mean and $\hat{\Sigma} = \frac{1}{n}\sum_i (x_i - \hat{\mu})(x_i - \hat{\mu})^\top$ be the empirical covariance. Then we can bound the error of $\hat{\mu}$ as follows:
Claim: $\|\hat{\mu} - \mu\| \leq O\big(\sqrt{d/n} + \sqrt{\epsilon\,\|\hat{\Sigma}\|}\big)$.
In other words, if the spectral norm of the empirical covariance matrix is small, then the empirical mean is a good estimator for the population mean.
Note: If all points are from $N(\mu, I_d)$ then $\|\hat{\Sigma}\| \approx 1$.
The proof of this claim is given below.
Explanation: Here, we assume $\mu = 0$ for simplicity, without loss of generality, and write $v = \hat{\mu}/\|\hat{\mu}\|$, so that
$$\|\hat{\mu} - \mu\| = \langle \hat{\mu}, v\rangle = \frac{1}{n}\sum_{i\ \text{good}}\langle x_i, v\rangle + \frac{1}{n}\sum_{i\ \text{bad}}\langle x_i, v\rangle.$$
The norm can thus be split into additive terms over the good points and over the malicious points. The first (good) term is at most $O(\sqrt{d/n})$ by standard concentration. For the second (malicious) term, we can modify it by adding and subtracting $\hat{\mu}$, writing it as $\frac{1}{n}\sum_{i\ \text{bad}}\langle x_i - \hat{\mu}, v\rangle + \epsilon\langle \hat{\mu}, v\rangle$. We can then apply the Cauchy-Schwarz inequality to bound the first of these by $\sqrt{\epsilon}\cdot\sqrt{v^\top\hat{\Sigma}v} \leq \sqrt{\epsilon\,\|\hat{\Sigma}\|}$. Upon rearranging the terms (moving the $\epsilon\langle\hat{\mu},v\rangle$ term to the left-hand side) and dividing by $1-\epsilon$, we get the desired result.
Filtering is an approach to turn the certificate into an algorithm to actually find the estimator. The idea is that the same claim holds for non-uniform reweightings of the data points used to estimate the empirical mean and covariance. Hence we can use a violation of the spectral norm condition (the existence of a large eigenvalue) to assign "blame" to some data points and down-weight their contributions, until we reach a probability distribution over points that on the one hand is spread over roughly a $1-\epsilon$ fraction of the points and on the other hand leads to a weighted empirical covariance matrix with small spectral norm. The above motivates the following robust mean estimation algorithm (in outline):
1. Compute the weighted empirical mean $\hat{\mu}_w$ and covariance $\hat{\Sigma}_w$ under the current weighting $w$ (initialized to uniform).
2. If $\|\hat{\Sigma}_w\|$ is small enough (the certificate holds), output $\hat{\mu}_w$.
3. Otherwise, let $v$ be the top eigenvector of $\hat{\Sigma}_w$; down-weight each point multiplicatively in proportion to how much it contributes to the error along $v$, and go back to step 1.
Explanation: We first compute the mean and covariance based on the uniform weighting, and the certificate is checked via the spectral norm of the covariance. If the quality isn't good enough, then the blame is assigned via the largest eigenvector of the covariance, along which the points contribute the most error. The weighting is then improved from the uniform initialization via the multiplicative-weights-style update in step 3. Jerry Li and Jacob Steinhardt have wonderful lecture notes on these topics. The algorithm above is computationally efficient, while the bounds are not that tight.
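Here is a minimal Python sketch of a filtering-style robust mean estimator along these lines (the stopping threshold and the exact down-weighting rule are illustrative choices, not necessarily the ones from the lecture):

```python
import numpy as np

def filter_robust_mean(X, eps, max_iter=100):
    """Filtering-style robust mean estimation (illustrative sketch).

    X:   (n, d) data; a (1 - eps) fraction drawn from N(mu, I), the rest arbitrary.
    eps: corruption fraction.
    """
    n, d = X.shape
    w = np.ones(n) / n                              # start from uniform weights
    for _ in range(max_iter):
        mu_w = w @ X / w.sum()                      # weighted empirical mean
        centered = X - mu_w
        cov_w = (centered * w[:, None]).T @ centered / w.sum()
        eigvals, eigvecs = np.linalg.eigh(cov_w)
        lam, v = eigvals[-1], eigvecs[:, -1]        # top eigenvalue / eigenvector
        if lam <= 1 + 10 * eps:                     # spectral certificate holds
            return mu_w
        tau = (centered @ v) ** 2                   # "blame" scores along v
        w = w * (1 - tau / tau.max())               # multiplicative down-weighting
        w = w / w.sum()
    return w @ X / w.sum()

# Usage on synthetic corrupted data (hypothetical parameters).
rng = np.random.default_rng(0)
n, d, eps = 2000, 50, 0.1
good = rng.normal(size=(int((1 - eps) * n), d))
bad = rng.normal(loc=5.0, size=(n - good.shape[0], d))   # clustered outliers
X = np.vstack([good, bad])
print("naive mean error   :", np.linalg.norm(X.mean(axis=0)))
print("filtered mean error:", np.linalg.norm(filter_robust_mean(X, eps)))
```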
SoS algorithms: Another approach is via sum-of-squares algorithms, where the guarantees for statistical robustness are much tighter, but which are computationally less efficient (although still polynomial time). A hybrid approach might give a balance between the two based on the problem at hand, bridging this gap between computational and statistical efficiency.
Test-time robustness
We now cover robustness issues that arise at test/deployment time: distribution shifts and adversarial perturbations.
Warm-up with pictures
As shown in Steinhardt, Koh, Liang, 2017 the images below illustrate how poisoning samples can drastically alter the performance of a classifier from good to bad.
Shafahi, Huang, Najibi, Suciu, Studer, Dumitras, Goldstein, 2018 showed how poisoned images can look perfectly fine to human perception, while they flip the model's predictions, so that a fish is classified as a dog and vice versa, as shown below.
Another problem with regards to test-time robustness is the issue of domain shift where the distribution of test data is different from the distribution of training data.
We want to compare the expected loss on the test distribution, $\mathbb{E}_{(x,y)\sim\mathcal{D}'}[L(f(x),y)]$, vs. the expected loss on the training distribution, $\mathbb{E}_{(x,y)\sim\mathcal{D}}[L(f(x),y)]$. If $L$ is bounded, or if a Lipschitz constant for it is known, then distances between the distributions such as the earth mover's distance or the total variation distance can be used to bound the gap between the two. But one needs to be quite careful, or these bounds become too large or close to vacuous.
For example, images of dogs taken, say, in a forest vs. images of dogs taken on roads have a huge distance in measures like the ones mentioned above, as many pixels (although not important pixels) across the images are very different, and there could be a classifier that performs terribly on the test distribution while doing well on the training one. But magically, there appears to be a linear relationship between the accuracy on $\mathcal{D}$ vs. the accuracy on $\mathcal{D}'$. Typically one would expect the points to lie below the $x=y$ (45 degree) line.
I.e., say if a model was trained on CIFAR-10 ($\mathcal{D}$) and tested on CIFAR-10.2 ($\mathcal{D}'$), then finding the points to be a bit lower than the $x=y$ line is not surprising, but the linear correlation itself is quite surprising. It is even more surprising when the datasets are more different, say when performance on photos vs. illustrations or cartoons of those photos is considered.
What are the potential reasons (intuitively)?
i) Overfitting on the CIFAR-10 test set,
ii) Fewer human annotators per image introducing a skew in the hardness of the new dataset,
iii) Annotators "running out" of images, as they might end up choosing images that are easier to annotate.
If we achieve better accuracy on $\mathcal{D}'$ than we achieved on $\mathcal{D}$, it is a strong indicator of a drop in hardness. If the errors are in the other direction, then this reasoning isn't as clear.
A toy theoretical model
Here is a toy model that can demonstrate this surprising linear relationship between accuracies under domain shift. Let us do a thought experiment where things are linear. Let us assume there is a true 'cat direction' $w$ in terms of the representation (features), as shown in the cartoon below. Let there also be some idiosyncratic direction, per dataset, that is correlated with the label. An example idiosyncrasy: cats tend to be photographed indoors more than outdoors.
Consider $x$ to be a point that needs to be labeled.
In some dataset $\mathcal{D}$, consider the probability that $x$ is labeled as a cat to be proportional to the following exponential:
$$\Pr_{\mathcal{D}}[x \text{ labeled cat}] \;\propto\; \exp\big(\langle w, x\rangle + \alpha_{\mathcal{D}}\langle u_{\mathcal{D}}, x\rangle\big),$$
where $u_{\mathcal{D}}$ denotes the idiosyncratic direction and $\alpha_{\mathcal{D}}$ the idiosyncratic correlation factor. The exponent is the dot product of the image to be labeled with the CAT direction plus the (scaled) dot product with the idiosyncratic direction. The same can be done for a dataset $\mathcal{D}'$:
$$\Pr_{\mathcal{D}'}[x \text{ labeled cat}] \;\propto\; \exp\big(\langle w, x\rangle + \alpha_{\mathcal{D}'}\langle u_{\mathcal{D}'}, x\rangle\big).$$
Intuitively, $\alpha$ behaves like an (inverse) signal-to-noise ratio: if $\alpha_{\mathcal{D}'} > \alpha_{\mathcal{D}}$, then $\mathcal{D}'$ is a harder dataset to classify than $\mathcal{D}$ for a classifier that can only rely on the universal direction.
So in this toy model, the best accuracy that can be reached on a dataset by any linear classifier $v$ is a monotone function of the alignment $\langle v, w\rangle + \alpha_{\mathcal{D}}\langle v, u_{\mathcal{D}}\rangle$, where the softmax of this quantity is the classification probability.
The first term of this accuracy is the universal and transferable part and the second term is the idiosyncratic part.
The key assumption is that if $v$ is trained on $\mathcal{D}$, then $\langle v, u_{\mathcal{D}'}\rangle \approx 0$: the idiosyncratic directions of $\mathcal{D}$ and $\mathcal{D}'$ are (assumed to be) orthogonal to each other, so when evaluating on $\mathcal{D}'$ the idiosyncratic term contributes nothing, and the accuracy on $\mathcal{D}'$ depends only on the universal term $\langle v, w\rangle$.
If the model is learnt by gradient descent, then the gradient direction will always be (up to noise) in the span of $w$ and $u_{\mathcal{D}}$, as the gradient is a weighted combination of the training examples. So, if $v$ is trained on $\mathcal{D}$, then $v \approx c\,(w + \alpha_{\mathcal{D}} u_{\mathcal{D}}) + \text{Noise}$ for some scaling $c$. Therefore the accuracies will be as follows.
Therefore, we see this form of a linear relationship:
$$\mathrm{acc}_{\mathcal{D}'}(v) \;\approx\; a\cdot \mathrm{acc}_{\mathcal{D}}(v) + b,$$
where the slope $a$ and offset $b$ depend on $\alpha_{\mathcal{D}}$ and $\alpha_{\mathcal{D}'}$ but not on the particular model $v$.
Note that:
- the slope is less than $1$ iff $\mathcal{D}'$ is harder than $\mathcal{D}$,
- the offset grows with the idiosyncratic component of $\mathcal{D}$.
Although this is a toy theoretical model, it explains a linear relationship. However, finding a model that explains this linear relationship in real life would be an interesting project to think about.
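For what it's worth, here is a rough numerical illustration under a simplified (noiseless) version of the toy model above; the directions, values of $\alpha$, and noise levels are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 200
w = np.zeros(dim); w[0] = 1.0            # universal "cat" direction
u1 = np.zeros(dim); u1[1] = 1.0          # idiosyncratic direction of dataset D
u2 = np.zeros(dim); u2[2] = 1.0          # idiosyncratic direction of dataset D'

def sample(n, u, alpha):
    """Noiseless version of the toy model: label = sign(<w, x> + alpha * <u, x>)."""
    x = rng.normal(size=(n, dim))
    return x, np.sign(x @ (w + alpha * u))

def accuracy(v, x, y):
    return np.mean(np.sign(x @ v) == y)

x1, y1 = sample(20_000, u1, alpha=0.5)   # dataset D
x2, y2 = sample(20_000, u2, alpha=0.7)   # dataset D' (a bit "harder")

# Classifiers of varying quality: noisy versions of the direction learnable from D.
for noise in (0.0, 0.5, 1.0, 2.0, 4.0):
    v = (w + 0.5 * u1) + noise * rng.normal(size=dim)
    print(f"acc on D = {accuracy(v, x1, y1):.3f}   acc on D' = {accuracy(v, x2, y2):.3f}")
```

As the classifier quality degrades, the two accuracies fall together and the printed pairs roughly trace out a line, with accuracy on $\mathcal{D}'$ sitting below accuracy on $\mathcal{D}$.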
Adversarial perturbations
We now move to the last (yet another active) topic of this post: adversarial perturbations. As an introductory example (taken from this tutorial), the hog image $x$ was originally classified as a hog with probability $\approx 0.9$. A small amount of noise $\delta$ was added to get the image $x+\delta$, which to the human eye looks perceptually indistinguishable from the original. That said, the model now ends up misclassifying the noised hog as something that is not a hog.
Should we be surprised by the efficacy of such adversarial perturbations? Originally, certainly yes; but not as much now, in hindsight!
In this example it turns out that the magnitude of each element of the perturbation $\delta$ is tiny, while its 2-norm is only moderately small, since the noise is spread over all the pixels. Note that the ResNet-50 model's output at the penultimate layer is a representation $r(x)$ of dimension $2048$. We scale $r(x)$ such that $\|r(x)\| = 1$. There is a Lipschitz constant governing how much $r(x)$ can move, in these norms, when $x$ is perturbed. The final classification decision is made by looking at $\langle r(x), w\rangle$, where $w$ is a unit vector: the 'hog direction'. We assume, for simplicity, some randomness in the decision, so that the probability that $x$ is classified as not a hog is determined by the size of the projection $\langle r(x), w\rangle$.
As we know that the observed probability of being not a hog is 0.0996, we can calculate (working backwards through this model) the size of the projection of $r(x)$ onto the HOG direction; it turns out to account for only a small fraction of the (unit) norm of the representation.
So, say, the 2048-dimensional representation vector has one larger element which accounts for the HOG direction, although that element still accounts for only a small proportion of its total norm. Therefore it wouldn't take too much noise to flip one class to the wrong class label, as shown in the cartoons below.
So if the Lipschitz constant is greater than around 2.5 or 3, then a perturbation whose relative magnitude is about 1/25 is enough to zero out the hog direction.
Robust loss
What are some strategies for training neural networks that are robust to perturbations?
- A set $\mathcal{T}$ of transformations that do not change the true nature of the data, such as, e.g., the set $\mathcal{T} = \{\tau \;:\; \|\tau(x) - x\|_\infty \leq \epsilon \text{ for all } x\}$,
i.e., they only perturb the image or data sample by at most $\epsilon$ (in each coordinate) upon applying that transformation. Now, given a loss function $L$ and a classifier $f$, the robust loss of $f$ at a point $(x, y)$ is defined as
$$L_{\mathcal{T}}(f, x, y) = \max_{\tau \in \mathcal{T}} L\big(f(\tau(x)),\, y\big).$$
Robust training:
Given data $(x_1, y_1), \ldots, (x_n, y_n)$ and a robust loss function, robust training would involve the minimization of this loss, which gives us:
$$\min_f\ \frac{1}{n}\sum_{i=1}^{n}\ \max_{\tau \in \mathcal{T}} L\big(f(\tau(x_i)),\, y_i\big).$$
Now for the subgoal of finding the gradient of the inner maximum $\max_{\tau \in \mathcal{T}} L(f_\theta(\tau(x)), y)$ for the sake of optimization, invoking Danskin's Theorem greatly helps. The theorem basically says that if $g(\theta, \tau)$ is nice (differentiable, continuous) and if $\mathcal{T}$ is compact, then we have that:
$$\nabla_\theta \max_{\tau \in \mathcal{T}} g(\theta, \tau) \;=\; \nabla_\theta\, g(\theta, \tau^*), \qquad \tau^* \in \arg\max_{\tau \in \mathcal{T}} g(\theta, \tau).$$
I.e., for any function $g$ that depends on $\theta$ and $\tau$, as long as $g$ is nice (differentiable, continuous) and the set from which $\tau$ needs to be chosen is compact, then to find the gradient of the maximum of the function, by the theorem one can find a maximizer $\tau^*$ for the current $\theta$, and the gradient evaluated at that maximizer gives the required gradient (after some fine print that is given in the note below). Note: The paper below extends this to the case where the maximizer is not unique, though there is other fine print. See Appendix A of (Madry, Makelov, Schmidt, Tsipras, Vladu 2017).
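To make this concrete, here is a minimal PyTorch-style sketch of robust training built on this principle, approximating the inner maximization with projected gradient descent in the spirit of the Madry et al. paper; the step sizes, number of steps, and $\ell_\infty$ threat model below are illustrative assumptions, not details from the lecture:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Approximate the inner max over ||delta||_inf <= eps of L(f(x + delta), y) via PGD."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()    # gradient ascent step on the loss
            delta.clamp_(-eps, eps)               # project back into the eps-ball
            delta.grad.zero_()
    return delta.detach()

def robust_training_step(model, optimizer, x, y):
    """One min-max training step: by Danskin's theorem, the gradient of the robust loss
    evaluated at the (approximate) inner maximizer is a valid descent direction."""
    delta = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)   # outer minimization at x + delta*
    loss.backward()
    optimizer.step()
    return loss.item()
```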
On the empirical side, there seems to be a trade-off. If one wants to achieve adversarial robustness, it can be achieved by letting go of some model accuracy. See discussion here and the papers cited below.