What do deep networks learn and when do they learn it
Windows On Theory 2021-02-17
Scribe notes by Manos Theodosis
Lecture video – Slides (pdf) – Slides (powerpoint with ink and animation)
In this lecture, we talk about what neural networks end up learning (in terms of their weights) and when, during training, they learn it.
In particular, we’re going to discuss
- Simplicity bias: how networks favor “simple” features first.
- Learning dynamics: what is learned early in training.
- Different layers: do the different layers learn the same features?
The type of results we will discuss are:
- Gradient-based deep learning algorithms have a bias toward learning simple classifiers. In particular this often holds when the optimization problem they are trying to solve is “underconstrained/overparameterized”, in the sense that there are exponentially many different models that fit the data.
- Simplicity also affects the timing of learning. Deep learning algorithms tend to learn simple (but still predictive!) features first.
- Such “simple predictive features” tend to be in lower (closer to input) levels of the network. Hence deep learning also tends to learn lower levels earlier.
- On the flip side, the above means that distributions that do not have “simple predictive features” pose significant challenges for deep learning. Even if there is a small neural network that works very well for the distribution, gradient-based algorithms will not “get off the ground” in such cases. We will see a lower bound for learning parities that makes this intuition formal.
What do neural networks learn, and when do they learn it?
As a first example to showcase what is learned by neural networks, we’ll consider the following two-dimensional data distribution, where we sample points $x \in \mathbb{R}^2$ together with a label $y \in \{\pm 1\}$ (one label value corresponding to the orange points and the other to the blue points).
If we train a neural network to fit this distribution, we can see below that the neurons that are closest to the input data end up learning features that are highly correlated with the input (mostly linear subspaces at a 45-degree angle, which correspond to one of the stripes). In the subsequent layers, the features learned are more sophisticated and have increased complexity.
Neural networks have simpler but useful features in lower layers
Some people have spent a lot of time trying to understand what is learned by different layers. In a recent work, Olah et al. dig deep into a particular architecture for computer vision, trying to interpret the features learned by neurons at different layers.
They found that earlier layers learn features that resemble edge detectors. However, as we go deeper, the neurons at those layers start learning more convoluted features (for example, some of the features at the deeper layers resemble heads).
SGD learns simple (but still predictive) features earlier.
There is evidence that SGD learns simpler classifiers first. The following figure tracks how much of a learned classifier’s performance can be accounted for by a linear classifier. We see that up to a certain point in training, all of the performance of the neural network $f$ learned by SGD (measured as mutual information with the label or as accuracy) can be ascribed to the linear classifier $L$. They diverge only very near the point where the linear classifier “saturates,” in the sense that it reaches the best possible accuracy for linear models. (We use the quantity $I(f(x); y \mid L(x))$ – the mutual information of $f(x)$ and the label $y$ conditioned on the prediction of the linear classifier – to measure how much of $f$’s performance cannot be accounted for by $L$.)
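To make this measurement concrete, here is a minimal numpy sketch of a plug-in estimator for the conditional mutual information $I(f(x); y \mid L(x))$ from discrete predictions. The estimator, variable names, and toy data below are our own illustration, not the setup of the paper.

```python
import numpy as np
from collections import Counter

def cond_mutual_info(f_pred, y, l_pred):
    """Plug-in estimate of I(F; Y | L) in bits, for discrete 1-D arrays."""
    n = len(y)
    p_fyl = {k: v / n for k, v in Counter(zip(f_pred, y, l_pred)).items()}
    p_l   = {k: v / n for k, v in Counter(l_pred).items()}
    p_fl  = {k: v / n for k, v in Counter(zip(f_pred, l_pred)).items()}
    p_yl  = {k: v / n for k, v in Counter(zip(y, l_pred)).items()}
    return sum(p * np.log2(p * p_l[l] / (p_fl[(f, l)] * p_yl[(yy, l)]))
               for (f, yy, l), p in p_fyl.items())

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100_000)            # binary labels
L = y ^ (rng.random(100_000) < 0.2)        # "linear classifier" that is 80% accurate
print(cond_mutual_info(L, y, L))           # network identical to the linear model: 0 bits
print(cond_mutual_info(y, y, L))           # network predicts perfectly: > 0 bits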
The benefits and pitfalls of simplicity bias
In general, simplicity bias is a very good thing. For example, the most “complex” function is a random function. However, if, given some observed data $(x_1, y_1), \ldots, (x_n, y_n)$, SGD were to find a random function $f$ that perfectly fits it, then it would never generalize (since for every fresh $x$, the value of $f(x)$ would be random).
At the same time, simplicity bias means that our algorithms might focus too much on simple solutions and miss more complex ones. Sometimes the complex solutions actually do perform better. In the following cartoon a person could go to the low-hanging fruit tree on the right-hand side and miss the bigger rewards on the left-hand side.
This can actually happen in neural networks. We also saw a simple example in class:
The two datasets are equally easy to represent, but on the right-hand side there is a very strong “simple classifier” (the 45-degree halfspace) that SGD will “latch onto.” Once it gets stuck with that classifier, it is hard for SGD to get “unstuck.” As a result, SGD has a much harder time learning the right-hand dataset than the left-hand dataset.
Analyzing SGD for overparameterized linear regression
So, what can we prove about the dynamics of gradient descent? Often we can gain insights by studying linear regression.
Formally, given $X \in \mathbb{R}^{n \times d}$ and $y \in \mathbb{R}^n$ with $d > n$ (more parameters than equations), we would like to find a vector $w \in \mathbb{R}^d$ such that $Xw = y$.
In this setting, we can prove that running SGD (from zero or tiny initialization) on the loss $\|Xw - y\|^2$ will converge to the solution of minimum norm. To see why, note that SGD performs updates of the form $w_{t+1} = w_t - \eta\,(\langle x_i, w_t\rangle - y_i)\, x_i$. However, note that $(\langle x_i, w_t\rangle - y_i)$ is a scalar. Therefore all of the updates keep the updated vector within $\mathrm{span}\{x_1,\ldots,x_n\}$ (the row span of $X$). This implies that the converging solution will also lie in this subspace. Geometrically, this translates into the solution being the unique point of $\{w : Xw = y\}$ that lies in the subspace $\mathrm{span}\{x_1,\ldots,x_n\}$, which is the least-norm solution.
Analyzing the dynamics of (full-batch) gradient descent, we can write the distance between consecutive weight iterates and the converging solution $w_\infty$ as $w_{t+1} - w_\infty = (I - \eta X^\top X)(w_t - w_\infty)$. We see that we are applying the linear operator $I - \eta X^\top X$ at every step we take. As long as this operator is contractive (on the row span of $X$), we will continue to progress and converge to $w_\infty$. Formally, to make progress we require $\eta \leq 1/\lambda_{\max}(X^\top X)$, and then the progress we make per step is approximately a factor of $1 - 1/\kappa$, where $\kappa = \lambda_{\max}(X^\top X)/\lambda_{\min}(X^\top X)$ is the condition number of $X^\top X$ (restricted to the row span).
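As a sanity check of these claims, here is a minimal numpy sketch (the dimensions, learning rate, and number of steps are arbitrary choices): SGD from zero initialization on the squared loss stays in the row span of $X$ and ends up at the minimum-norm interpolating solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 100                        # overparameterized: d > n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                       # zero initialization
eta = 0.01
for _ in range(100_000):
    i = rng.integers(n)               # pick a random sample
    w -= eta * (X[i] @ w - y[i]) * X[i]   # each update is a multiple of x_i

w_min_norm = np.linalg.pinv(X) @ y    # minimum-norm interpolating solution
print(np.abs(X @ w - y).max())        # ~0: w interpolates the data
print(np.linalg.norm(w - w_min_norm)) # ~0: w is the minimum-norm solution
```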
What happens now if the matrix $X$ is random? Then results from random matrix theory (specifically the Marchenko–Pastur distribution) state that (see also the numerical sketch after the list below):
- if $n \gg d$, then the matrix $X^\top X$ has full rank and its eigenvalues are bounded away from $0$. This means that the matrix is well conditioned.
- if $n \approx d$, then the spectrum of $X^\top X$ starts shifting towards $0$, with some eigenvalues being equal (or very close) to zero, resulting in an ill-conditioned matrix.
- if $d \gg n$, then the spectrum of $X^\top X$ has some zero eigenvalues (its rank is at most $n$), but it is otherwise bounded away from zero. If we restrict to the subspace corresponding to the positive eigenvalues, we again achieve a good condition number.
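Here is a quick numerical illustration of the three regimes (the specific shapes below are arbitrary): we look at the eigenvalues of $\frac{1}{n}X^\top X$ for a random Gaussian $X \in \mathbb{R}^{n \times d}$.

```python
import numpy as np

rng = np.random.default_rng(2)

def spectrum(n, d):
    """Eigenvalues of (1/n) X^T X for an n-by-d Gaussian matrix X."""
    X = rng.standard_normal((n, d))
    return np.linalg.eigvalsh(X.T @ X / n)

for n, d in [(2000, 200), (2000, 2000), (200, 2000)]:
    eig = spectrum(n, d)
    pos = eig[eig > 1e-8]             # eigenvalues that are not (numerically) zero
    print(f"n={n:4d} d={d:4d}  zeros={np.sum(eig <= 1e-8):4d}  "
          f"min_pos={pos.min():.2e}  max={pos.max():.2f}  cond={pos.max() / pos.min():.1e}")
```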
Deep linear networks
We now want to go beyond linear regression and talk about deep networks. As deep networks are very hard to understand, we will start by analyzing a depth-2 network. We will also consider a linear network and omit the nonlinearity. This might seem strange, as we could consider the corresponding linear model, which has exactly the same expressiveness. However, note that these two models have a different parameter space. This means that gradient-based algorithms will travel on different paths when optimizing these two models.
Specifically, we can see that the minimum loss attained by the two models will coincide, i.e., $\min_{W_1, W_2} L(W_2 W_1) = \min_{W} L(W)$, but the SGD path and the solution reached will be different.
We will analyze the gradient flow on these two networks (which is gradient descent with the learning rate $\eta \to 0$). We will make the simplifying assumption that $W_1 = W_2 = W$ and that $W$ is symmetric. Then we can see that the end-to-end matrix is $W_2 W_1 = W^2$. We will try and compare the gradient flow of two different loss functions: $L(W)$ (doing gradient flow on a linear model) and $L(W^2)$ (doing gradient flow on a depth-2 linear model).
Gradient flow on the linear model simply gives $\frac{dW}{dt} = -\nabla L(W)$, whereas for the deep linear network we have (using the chain rule) $\frac{dW}{dt} = -\big(W\,\nabla L(W^2) + \nabla L(W^2)\,W\big)$, since $W$ is symmetric.
For simplicity, let’s denote $Z = W^2$ and $\nabla = \nabla L(Z)$. We then have $\frac{dZ}{dt} = \frac{dW}{dt}W + W\frac{dW}{dt} = -\big(Z\nabla + \nabla Z + 2\,W\nabla W\big)$.
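As a quick sanity check of the chain-rule expression for $\nabla_W L(W^2)$ above, here is a finite-difference comparison (the quadratic loss $L(Z) = \frac{1}{2}\|Z - B\|_F^2$ is an arbitrary choice for illustration).

```python
import numpy as np

rng = np.random.default_rng(3)
d = 5
W = rng.standard_normal((d, d)); W = (W + W.T) / 2        # symmetric weight matrix
B = rng.standard_normal((d, d)); B = (B + B.T) / 2

L = lambda Z: 0.5 * np.sum((Z - B) ** 2)                  # an arbitrary smooth loss
grad_L = lambda Z: Z - B                                  # its gradient

# chain-rule formula: d/dW L(W^2) = W @ grad_L(W^2) + grad_L(W^2) @ W
analytic = W @ grad_L(W @ W) + grad_L(W @ W) @ W

eps = 1e-6
numeric = np.zeros_like(W)
for i in range(d):                                        # central finite differences
    for j in range(d):
        E = np.zeros_like(W); E[i, j] = eps
        numeric[i, j] = (L((W + E) @ (W + E)) - L((W - E) @ (W - E))) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))                 # tiny (≈ 1e-8 or less)
```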
Another way to view the comparison between the models of interest, $L(W)$ and $L(W^2)$, is as follows: in both cases we can track the end-to-end matrix, but in the deep model the gradient $\nabla$ gets multiplied on the left and/or right by $Z$ (or by its square root $W$). We can view this as follows: when we multiply the gradient with $Z$, we end up making the “big bigger and the small smaller.” Basically, this accentuates the differences between the eigenvalues and is biasing $Z$ to become a low-rank matrix.
To see why, you can think of a low-rank matrix as one that has a few large eigenvalues while the others are small. If $Z$ is already close to low rank, then replacing the gradient $\nabla$ by $Z\nabla + \nabla Z + 2W\nabla W$ encourages the gradient steps to mostly happen in the top eigenspace of $Z$. This result generalizes to networks of greater depth $N$: the gradient gets multiplied by higher powers of $Z$, and the bias toward low rank becomes stronger as the depth grows.
This means that we end up doing gradient flow on a Riemannian manifold: the usual gradient is replaced by a preconditioned gradient, where the preconditioning depends on the current point. An interesting result is that the flow induced by this operator is provably not equivalent to a regularized minimization problem $\min_Z L(Z) + \lambda R(Z)$ for any regularizer $R$.
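Here is a toy numpy sketch (not from the lecture) illustrating the low-rank bias discussed above: we complete a rank-1 matrix from a subset of its entries, once by gradient descent directly on $Z$ and once through a factorized parameterization $Z = WW^\top$ (a close cousin of the $Z = W^2$ parameterization above) from a small initialization. All hyperparameters below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 6
u = rng.standard_normal(d)
M = np.outer(u, u)                        # rank-1 PSD ground-truth matrix
mask = rng.random((d, d)) < 0.8           # we only observe ~80% of the entries

def grad(Z):                              # gradient of 0.5 * ||mask * (Z - M)||_F^2
    return mask * (Z - M)

# (a) gradient descent directly on Z, starting from zero
Z_direct = np.zeros((d, d))
for _ in range(100_000):
    Z_direct -= 0.01 * grad(Z_direct)

# (b) gradient descent on the factorized parameterization Z = W W^T, small init
W = 0.01 * rng.standard_normal((d, d))
for _ in range(100_000):
    R = grad(W @ W.T)
    W -= 0.01 * (R + R.T) @ W
Z_deep = W @ W.T

print(np.round(np.linalg.svd(Z_direct, compute_uv=False), 3))  # spectrum spread out
print(np.round(np.linalg.svd(Z_deep, compute_uv=False), 3))    # typically one dominant value
```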
What is learned at different layers?
Finally, let’s discuss what is learned by the different layers in a neural network. Some intuition people have is that learning proceeds roughly like the following cartoon:
We can think of our data as being “built up” as a sequence of choices from higher level to lower level features. For example, the data is generated by first deciding that it would be a photo of a dog, then that it would be on the beach, and finally low-level details such as the type of fur and light. This is also how a human would describe this photo. In contrast, a neural network builds up the features in the opposite direction. It starts from the simplest (lowest-level) features in the image (edges, textures, etc.) and gradually builds up complexity until it finally classifies the image.
How do neural networks learn features?
To build a bit of intuition, consider an example of combining different simple features. We can see that if we try to combine two good edge detectors with different orientations, the end result will hardly be an edge detector.
So the intuition is that there is competitive/evolutionary pressure on neurons to “specialize” and recognize useful features. Initially, all the neurons are random features, which can be thought of as random linear combinations of the various detectors. However, after training, the symmetry between the neurons will break, and they will specialize (in this simple example, they will either become vertical or horizontal edge detectors).
Raghu, Gilmer, Yosinski, and Sohl-Dickstein tracked the speed at which features learned by different layers reach their final learned state. In the figure below, the diagonal elements denote the similarity of the current state of a layer to its final one, where a lighter color means that the state is more similar. We can see that earlier layers (more to the left) reach their final state earlier (with the exception of the two layers closest to the output, which also converge very early).
The “symmetry breaking” intuition is explored by a recent work of Frankle, Dziugaite, Roy, and Carbin. Intuitively, because the average of two good features is generally not a good feature, averaging the weights of two neural networks with small loss will likely result in a network with large loss. That is, if we start from two random initializations $w_0$ and $w_0'$ and train two networks until we reach weights $w$ and $w'$ with small loss, then we expect the average $\frac{w + w'}{2}$ to result in a network with poor loss.
In contrast, Frankle et al. showed that sometimes, when we start from the same initialization (especially after pruning) and the two training runs differ only in the SGD noise (obtained by randomly shuffling the training set), we reach a “linear plateau” of the loss function, in which averaging the two networks yields a network with similar loss.
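Here is a minimal numpy sketch of this kind of weight-averaging experiment: train a tiny MLP twice and evaluate the training loss along the straight line between the two weight settings. This is only a toy illustration (the architecture, task, and hyperparameters are our own choices, not those of Frankle et al.), so the curves may or may not show a clear barrier.

```python
import numpy as np

rng = np.random.default_rng(5)
# toy "XOR of blobs" dataset: 4 Gaussian clusters, opposite corners share a label
centers = np.array([[2, 2], [-2, -2], [2, -2], [-2, 2]])
X = np.vstack([rng.standard_normal((100, 2)) * 0.5 + c for c in centers])
y = np.repeat([0, 0, 1, 1], 100)

def init(seed):
    r = np.random.default_rng(seed)
    return [0.5 * r.standard_normal((2, 16)), np.zeros(16),
            0.5 * r.standard_normal((16, 2)), np.zeros(2)]

def forward(params, Xb):
    W1, b1, W2, b2 = params
    h_pre = Xb @ W1 + b1
    h = np.maximum(h_pre, 0)                       # ReLU hidden layer
    logits = h @ W2 + b2
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
    return h_pre, h, p

def loss(params, Xb, yb):
    _, _, p = forward(params, Xb)
    return -np.mean(np.log(p[np.arange(len(yb)), yb] + 1e-12))

def grads(params, Xb, yb):                         # backprop for softmax cross-entropy
    W1, b1, W2, b2 = params
    h_pre, h, p = forward(params, Xb)
    d = p.copy(); d[np.arange(len(yb)), yb] -= 1; d /= len(yb)
    dW2 = h.T @ d; db2 = d.sum(0)
    dh = d @ W2.T; dh[h_pre <= 0] = 0
    return [Xb.T @ dh, dh.sum(0), dW2, db2]

def train(params, shuffle_seed, steps=5000, lr=0.1, batch=32):
    r = np.random.default_rng(shuffle_seed)        # controls the mini-batch sampling ("SGD noise")
    params = [p.copy() for p in params]
    for _ in range(steps):
        idx = r.integers(0, len(y), size=batch)
        params = [p - lr * g for p, g in zip(params, grads(params, X[idx], y[idx]))]
    return params

def path(pa, pb):
    """Training loss along the straight line between two weight settings."""
    return [round(loss([(1 - a) * u + a * v for u, v in zip(pa, pb)], X, y), 3)
            for a in np.linspace(0, 1, 11)]

p0 = init(0)
print(path(train(p0, 1), train(p0, 2)))            # same init, different SGD noise
print(path(train(init(0), 1), train(init(7), 2)))  # two different initializations
```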
The contrapositive of simplicity: lower bounds for learning parities
If we believe that networks learn simple features first, and learn them in the early layers, then this has an interesting consequence. If the data is such that simple features (e.g., linear or low-degree ones) are completely uninformative (have no correlation with the label), then we may expect that learning cannot “get off the ground.” That is, even if there exists a small neural network that works very well for the distribution, gradient-based algorithms such as SGD will never find it. (In fact, it is possible that no efficient algorithm could find it.) There are some settings where we can prove such conjectures. (For gradient-based algorithms, that is; proving this for all efficient algorithms would require settling the P vs NP question.)
We discuss one of the canonical “hard” examples for neural networks: parities. Formally, for $I \subseteq [n]$, the distribution $D_I$ is the distribution over $\{\pm 1\}^n \times \{\pm 1\}$ defined as follows: $x$ is uniform over $\{\pm 1\}^n$ and $y = \prod_{i \in I} x_i$. The “learning parity” problem is as follows: given samples $(x_1, y_1), \ldots, (x_m, y_m)$ drawn from $D_I$, either recover $I$ or do the weaker task of finding a predictor $f$ such that $f(x) = y$ with high probability over future samples $(x, y) \sim D_I$.
It turns out that if we don’t restrict ourselves to deep learning, then given $O(n)$ samples we can recover $I$. Consider the transformations $b_i = \frac{1 - x_i}{2}$ and $c = \frac{1 - y}{2}$, which map $\{\pm 1\}$ to $\{0, 1\}$. If we let $a_i = 1$ if $i \in I$ and $a_i = 0$ otherwise, we can write $c = \sum_i a_i b_i \pmod 2$. Basically, we transformed the problem of parity into the problem of counting whether we have an odd or an even number of $-1$’s among the coordinates in $I$. In this setting, we can think of every sample as providing a linear equation modulo 2 over the unknown variables $a_1, \ldots, a_n$. When the number of samples is somewhat larger than $n$, these linear equations will very likely be of full rank, and hence we can use Gaussian elimination to find $a$ and hence $I$.
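For concreteness, here is a numpy sketch of this recovery procedure ($n$, $|I|$, and the number of samples below are arbitrary choices): convert each sample into a linear equation mod 2 and solve the system by Gaussian elimination.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 30, 60                                     # n variables, m samples
I = rng.choice(n, size=7, replace=False)          # the hidden set I
x = rng.choice([1, -1], size=(m, n))              # x uniform over {+1,-1}^n
y = np.prod(x[:, I], axis=1)                      # y = prod_{i in I} x_i

B = ((1 - x) // 2).astype(np.uint8)               # b_i = (1 - x_i)/2
c = ((1 - y) // 2).astype(np.uint8)               # c   = (1 - y)/2
A = np.concatenate([B, c[:, None]], axis=1)       # augmented system [B | c] over GF(2)

row, pivots = 0, []
for col in range(n):                              # Gaussian elimination mod 2
    pivot = next((r for r in range(row, m) if A[r, col]), None)
    if pivot is None:
        continue                                  # very unlikely when m is a bit above n
    A[[row, pivot]] = A[[pivot, row]]
    for r in range(m):
        if r != row and A[r, col]:
            A[r] ^= A[row]
    pivots.append(col)
    row += 1

a_hat = np.zeros(n, dtype=np.uint8)
for r, col in enumerate(pivots):
    a_hat[col] = A[r, n]
print(set(np.flatnonzero(a_hat)) == set(I))       # True: we recovered I
```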
Switching to the learning setting, we can express parities using few ReLUs. In particular, we’ve shown that we can create a step function using ReLUs. Therefore, for every threshold $k$, there is a combination of four ReLUs that computes a function $f_k$ such that $f_k(x)$ outputs $1$ if $\sum_{i \in I} x_i \geq k$ and outputs $0$ if $\sum_{i \in I} x_i \leq k - 2$. We can then write the parity function $\prod_{i \in I} x_i$ as an alternating linear combination of such step functions (one for each possible value of $\sum_{i \in I} x_i$). This will be a linear combination of at most $O(n)$ ReLUs.
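The exact constants of the four-ReLU step construction did not survive in these notes, so here is one standard variant of the same idea: a “bump” of three ReLUs around each possible count of $-1$ coordinates, combined with alternating signs, plus a brute-force check against the definition for small $n$.

```python
import numpy as np
from itertools import product

def relu(z):
    return np.maximum(z, 0)

def parity_net(x):
    """Parity of x in {+1,-1}^n as a linear combination of O(n) ReLUs.
    The parity only depends on s = number of -1 coordinates; a 'bump' of three
    ReLUs equals 1 when s == k and 0 at every other integer, so alternating
    +1/-1 coefficients over k reproduce (-1)^s = prod_i x_i."""
    s = np.sum((1 - x) // 2)                  # number of -1 coordinates
    out = 0.0
    for k in range(len(x) + 1):
        bump = relu(s - (k - 1)) - 2 * relu(s - k) + relu(s - (k + 1))
        out += (-1) ** k * bump
    return out

n = 8
ok = all(parity_net(np.array(x)) == np.prod(x) for x in product([1, -1], repeat=n))
print(ok)   # True: matches prod_i x_i on all 2^n inputs
```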
Parities are an example of a case where simple features are uninformative. For example, if $|I| \geq 2$, then for every linear function $L$ we have $\mathbb{E}_{(x,y) \sim D_I}[L(x) \cdot y] = 0$;
in other words, there is no correlation between the linear function and the label. To see why this is true, write $L(x) = \sum_i L_i x_i$. By linearity of expectation, it suffices to show that $\mathbb{E}_{(x,y) \sim D_I}[L_i x_i y] = L_i \, \mathbb{E}_{(x,y) \sim D_I}[x_i y] = 0$. Both $x_i$ and $y$ are just values in $\{\pm 1\}$. To evaluate the expectation we simply need to know the marginal distribution that $D_I$ induces on these two coordinates. This distribution is just the uniform distribution over $\{\pm 1\}^2$. To see why this is the case, consider a coordinate $j \in I$ with $j \neq i$ and let’s condition on the values of all coordinates other than $i$ and $j$. After conditioning on these values, $y$ equals $\sigma \cdot x_j$ or $\sigma \cdot x_i x_j$ (depending on whether $i \in I$) for some fixed $\sigma \in \{\pm 1\}$, and $x_i$ and $x_j$ are chosen uniformly and independently from $\{\pm 1\}$. For every choice of $x_i$, if we flip $x_j$ then that would flip the value of $y$, and hence the marginal distribution on $(x_i, y)$ will be uniform.
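A quick exact check of this claim for a small $n$ (the parity set below is an arbitrary choice with $|I| \geq 2$): every coordinate, and hence every linear function, has zero correlation with the parity label.

```python
import numpy as np
from itertools import product

n = 10
I = [1, 4, 5, 8]                                   # some parity set with |I| >= 2
X = np.array(list(product([1, -1], repeat=n)))     # all 2^n inputs
y = np.prod(X[:, I], axis=1)                       # parity labels

print(y.mean())                                    # 0.0: E[y] = 0
print(np.abs(X.T @ y / len(y)).max())              # 0.0: max_i |E[x_i * y]| = 0
# hence E[L(x) * y] = 0 for every linear function L, by linearity of expectation
```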
This lack of correlation turns out to be a real obstacle for gradient-based algorithms. While small neural networks for parities exist, and Gaussian elimination can find them, it turns out that gradient-based algorithms such as SGD will fail to do so. Parities are hard to learn in this sense, and even if the capacity of the network is such that it can memorize the training set, it will still perform poorly on a test set. Indeed, we can prove that for every neural network architecture $f_w$, running SGD on $f_w$ will require exponentially many (in $n$) steps. (Note that if we add noise to parities, then Gaussian elimination will fail, and it is believed that no efficient algorithm can learn the distribution in this case. This is known as the learning parity with noise problem, which is also related to the learning with errors problem that is the foundation of modern lattice-based cryptography.)
We now sketch the proof that gradient-based algorithms require exponentially many steps to learn parities, following Theorem 1 of Shalev-Shwartz, Shamir, and Shammah. We think of an idealized setting where we have an unlimited number of samples and use each sample only once (this should only make learning easier). We will show that we make very little progress in learning $I$ by showing that, for any given weight vector $w$, the expected gradient over $(x, y) \sim D_I$ will be exponentially small for most choices of $I$, and hence we make very little progress toward learning $I$. Specifically, using the notation $\chi_I(x) = \prod_{i \in I} x_i$ and taking the square loss for concreteness, for any $I$ the population gradient is $\nabla_w \mathbb{E}_{(x,y) \sim D_I}\big[\tfrac{1}{2}(f_w(x) - y)^2\big] = \mathbb{E}_x\big[f_w(x)\,\nabla_w f_w(x)\big] - \mathbb{E}_x\big[\chi_I(x)\,\nabla_w f_w(x)\big]$.
The term $\mathbb{E}_x[f_w(x)\,\nabla_w f_w(x)]$ is independent of $I$ and so does not contribute toward learning $I$. Hence, intuitively, to show we make exponentially small progress, it suffices to show that for typical $I$, the quantity $\mathbb{E}_x[\chi_I(x)\,\nabla_w f_w(x)]$ will be exponentially small. (That is, even if for a fixed $x$ we make a large step, these contributions all cancel out and give us exponentially small progress toward actually learning $I$.)
Formally, we will prove the following lemma:
Lemma: For every $g : \{\pm 1\}^n \to \mathbb{R}$, $\mathbb{E}_{I \subseteq [n]}\big[\mathbb{E}_x[g(x)\chi_I(x)]^2\big] \leq \frac{\mathbb{E}_x[g(x)^2]}{2^n}$, where $I$ is a uniformly random subset of $[n]$.
Proof: Let us fix $w$ and one coordinate of the gradient, and define $g(x)$ to be that coordinate of $\nabla_w f_w(x)$. The quantity $\mathbb{E}_x[g(x)\chi_I(x)]$ can be written as $\langle g, \chi_I \rangle$ with respect to the inner product $\langle g, h \rangle = \mathbb{E}_{x \sim \{\pm 1\}^n}[g(x)h(x)]$. However, $\{\chi_I\}_{I \subseteq [n]}$ is an orthonormal basis with respect to this inner product. To see this, note that since $x_i^2 = 1$ for every $i$, we have $\langle \chi_I, \chi_I \rangle = 1$ for every $I$, and $\chi_I(x)\chi_J(x) = \chi_{I \triangle J}(x)$ for $I \neq J$, where $I \triangle J$ is the symmetric difference of $I$ and $J$. The reason is that $x_i^2 = 1$ for all $i$, and so elements that appear in both $I$ and $J$ “cancel out.” Since the coordinates of $x$ are distributed independently and uniformly, the expectation of the product $\chi_{I \triangle J}(x) = \prod_{i \in I \triangle J} x_i$ is the product of expectations. This means that as long as $I \triangle J$ is not empty (i.e., $I \neq J$), this will be a product of one or more terms of the form $\mathbb{E}[x_i]$. Since $x_i$ is uniform over $\{\pm 1\}$, $\mathbb{E}[x_i] = 0$, and so we get that if $I \neq J$, then $\langle \chi_I, \chi_J \rangle = 0$.
Given the above, we can expand $g$ in the basis $\{\chi_I\}$ and apply Parseval’s identity: $\sum_{I \subseteq [n]} \langle g, \chi_I \rangle^2 = \langle g, g \rangle = \mathbb{E}_x[g(x)^2]$,
which means that (since there are $2^n$ subsets of $[n]$) on average $\langle g, \chi_I \rangle^2 \leq \mathbb{E}_x[g(x)^2]/2^n$. In other words, $\langle g, \chi_I \rangle$ is typically exponentially small, which is what we wanted to prove.
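The identity behind the lemma is easy to verify numerically for small $n$: averaging $\langle g, \chi_I \rangle^2$ over all $2^n$ subsets gives exactly $\mathbb{E}_x[g(x)^2]/2^n$ (the choice of $g$ below is an arbitrary random function).

```python
import numpy as np
from itertools import chain, combinations, product

n = 8
X = np.array(list(product([1, -1], repeat=n)))     # all 2^n points of the cube
rng = np.random.default_rng(7)
g = rng.standard_normal(len(X))                    # an arbitrary g: {+1,-1}^n -> R

subsets = chain.from_iterable(combinations(range(n), k) for k in range(n + 1))
corrs = np.array([np.mean(g * np.prod(X[:, list(I)], axis=1)) for I in subsets])

print(np.mean(corrs ** 2))                         # E_I[ <g, chi_I>^2 ]
print(np.mean(g ** 2) / 2 ** n)                    # E_x[g(x)^2] / 2^n  -- they match
```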