11.7. Adagrad
Let us begin by considering learning problems with features that occur infrequently.
11.7.1. Sparse Features and Learning Rates
Imagine that we are training a language model. To get good accuracy we typically want to decrease the learning rate as we keep on training, usually at a rate of \(\mathcal{O}(t^{-\frac{1}{2}})\) or slower. Now consider a model training on sparse features, i.e., features that occur only infrequently. This is common for natural language, e.g., it is a lot less likely that we will see the word preconditioning than learning. However, it is also common in other areas such as computational advertising and personalized collaborative filtering. After all, there are many things that are of interest only for a small number of people.
Parameters associated with infrequent features only receive meaningful updates whenever these features occur. Given a decreasing learning rate we might end up in a situation where the parameters for common features converge rather quickly to their optimal values, whereas for infrequent features we still lack sufficiently many observations before their optimal values can be determined. In other words, the learning rate either decreases too slowly for frequent features or too quickly for infrequent ones.
A possible hack to redress this issue would be to count the number of times we see a particular feature and to use this as a clock for adjusting learning rates. That is, rather than choosing a learning rate of the form \(\eta = \frac{\eta_0}{\sqrt{t + c}}\) we could use \(\eta_i = \frac{\eta_0}{\sqrt{s(i, t) + c}}\). Here \(s(i, t)\) counts the number of nonzeros for feature \(i\) that we have observed up to time \(t\). This is actually quite easy to implement at no meaningful overhead. However, it fails whenever we do not quite have sparsity but rather just data where the gradients are often very small and only rarely large. After all, it is unclear where one would draw the line between something that qualifies as an observed feature or not.
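As a concrete (if throwaway) illustration of this counter-based schedule, the following plain-Java sketch keeps one clock per feature and derives a per-feature learning rate from it. The feature values and the constants \(\eta_0\) and \(c\) are made up for illustration and are not part of the notebook's utilities.
double eta0 = 0.1, c = 1.0;           // illustrative base learning rate and offset
int numFeatures = 4;
int[] counts = new int[numFeatures];  // s(i, t): how often feature i has been observed so far

// One hypothetical sparse example: only features 0 and 2 are active.
double[] example = {1.0, 0.0, 3.0, 0.0};
for (int i = 0; i < numFeatures; i++) {
    if (example[i] != 0) {
        counts[i]++;                  // advance this feature's "clock" only when it occurs
    }
    double etaI = eta0 / Math.sqrt(counts[i] + c);
    System.out.printf("feature %d: eta_i = %.3f%n", i, etaI);
}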
Adagrad by [Duchi et al., 2011] addresses this by replacing the rather crude counter \(s(i, t)\) by an aggregate of the squares of previously observed gradients. In particular, it uses \(s(i, t+1) = s(i, t) + \left(\partial_i f(\mathbf{x})\right)^2\) as a means to adjust the learning rate. This has two benefits: first, we no longer need to decide just when a gradient is large enough. Second, it scales automatically with the magnitude of the gradients. Coordinates that routinely correspond to large gradients are scaled down significantly, whereas others with small gradients receive a much more gentle treatment. In practice this leads to a very effective optimization procedure for computational advertising and related problems. But this hides some of the additional benefits inherent in Adagrad that are best understood in the context of preconditioning.
11.7.2. Preconditioning
Convex optimization problems are good for analyzing the characteristics of algorithms. After all, for most nonconvex problems it is difficult to derive meaningful theoretical guarantees, but intuition and insight often carry over. Let us look at the problem of minimizing \(f(\mathbf{x}) = \frac{1}{2} \mathbf{x}^\top \mathbf{Q} \mathbf{x} + \mathbf{c}^\top \mathbf{x} + b\).
As we saw in Section 11.6, it is possible to rewrite this problem in terms of its eigendecomposition \(\mathbf{Q} = \mathbf{U}^\top \boldsymbol{\Lambda} \mathbf{U}\) to arrive at a much simplified problem where each coordinate can be solved individually:

\[\bar{f}(\bar{\mathbf{x}}) = \frac{1}{2} \bar{\mathbf{x}}^\top \boldsymbol{\Lambda} \bar{\mathbf{x}} + \bar{\mathbf{c}}^\top \bar{\mathbf{x}} + b.\]

Here we used \(\bar{\mathbf{x}} = \mathbf{U} \mathbf{x}\) and consequently \(\bar{\mathbf{c}} = \mathbf{U} \mathbf{c}\). The modified problem has as its minimizer \(\bar{\mathbf{x}} = -\boldsymbol{\Lambda}^{-1} \bar{\mathbf{c}}\) and minimum value \(-\frac{1}{2} \bar{\mathbf{c}}^\top \boldsymbol{\Lambda}^{-1} \bar{\mathbf{c}} + b\). This is much easier to compute since \(\boldsymbol{\Lambda}\) is a diagonal matrix containing the eigenvalues of \(\mathbf{Q}\).
If we perturb \(\mathbf{c}\) slightly we would hope to find only slight changes in the minimizer of \(f\). Unfortunately this is not the case. While slight changes in \(\mathbf{c}\) lead to equally slight changes in \(\bar{\mathbf{c}}\), this is not the case for the minimizer of \(f\) (and of \(\bar{f}\) respectively). Whenever the eigenvalues \(\boldsymbol{\Lambda}_i\) are large we will see only small changes in \(\bar{x}_i\) and in the minimum of \(\bar{f}\). Conversely, for small \(\boldsymbol{\Lambda}_i\) changes in \(\bar{x}_i\) can be dramatic. The ratio between the largest and the smallest eigenvalue is called the condition number of an optimization problem.
If the condition number \(\kappa\) is large, it is difficult to solve the optimization problem accurately: we need to get a large dynamic range of values right. Our analysis leads to an obvious, albeit somewhat naive question: couldn't we simply "fix" the problem by distorting the space such that all eigenvalues are \(1\)? In theory this is quite easy: we only need the eigenvalues and eigenvectors of \(\mathbf{Q}\) to rescale the problem from \(\mathbf{x}\) to one in \(\mathbf{z} := \boldsymbol{\Lambda}^{\frac{1}{2}} \mathbf{U} \mathbf{x}\). In the new coordinate system \(\mathbf{x}^\top \mathbf{Q} \mathbf{x}\) simplifies to \(\|\mathbf{z}\|^2\). Alas, this is a rather impractical suggestion. Computing eigenvalues and eigenvectors is in general much more expensive than solving the actual problem.
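As a concrete illustration, consider the quadratic that we will use in the experiments below, \(f(\mathbf{x}) = 0.1 x_1^2 + 2 x_2^2\). Here \(\mathbf{Q} = \mathrm{diag}(0.2, 4)\) and \(\mathbf{U} = \mathbf{I}\), so

\[\kappa = \frac{4}{0.2} = 20 \qquad \text{and} \qquad \mathbf{z} = \boldsymbol{\Lambda}^{\frac{1}{2}} \mathbf{x} = \left(\sqrt{0.2}\, x_1, \, 2 x_2\right), \quad \text{with } \mathbf{x}^\top \mathbf{Q} \mathbf{x} = 0.2 x_1^2 + 4 x_2^2 = \|\mathbf{z}\|^2.\]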
While computing eigenvalues exactly might be expensive, guessing them and computing them even somewhat approximately may already be a lot better than not doing anything at all. In particular, we could use the diagonal entries of \(\mathbf{Q}\) and rescale it accordingly. This is much cheaper than computing eigenvalues.
In this case we have \(\tilde{\mathbf{Q}}_{ij} = \mathbf{Q}_{ij} / \sqrt{\mathbf{Q}_{ii} \mathbf{Q}_{jj}}\) and specifically \(\tilde{\mathbf{Q}}_{ii} = 1\) for all \(i\). In most cases this reduces the condition number considerably. For instance, in the cases we discussed previously, this would entirely eliminate the problem at hand since those problems are axis-aligned.
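For a small, made-up numerical example of this rescaling, take a \(2 \times 2\) matrix with a modest off-diagonal term. Diagonal preconditioning shrinks its condition number from about \(4.4\) to about \(1.7\):

\[\mathbf{Q} = \begin{pmatrix} 1 & 0.5 \\ 0.5 & 4 \end{pmatrix}
\;\Rightarrow\;
\tilde{\mathbf{Q}} = \begin{pmatrix} 1 & 0.25 \\ 0.25 & 1 \end{pmatrix},
\qquad
\kappa(\mathbf{Q}) = \frac{(5 + \sqrt{10})/2}{(5 - \sqrt{10})/2} \approx 4.4,
\qquad
\kappa(\tilde{\mathbf{Q}}) = \frac{1.25}{0.75} \approx 1.7.\]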
Unfortunately we face yet another problem: in deep learning we typically do not even have access to the second derivative of the objective function: for \(\mathbf{x} \in \mathbb{R}^d\) the second derivative even on a minibatch may require \(\mathcal{O}(d^2)\) space and work to compute, thus making it practically infeasible. The ingenious idea of Adagrad is to use a proxy for that elusive diagonal of the Hessian that is both relatively cheap to compute and effective—the magnitude of the gradient itself.
In order to see why this works, let us look at \(\bar{f}(\bar{\mathbf{x}})\). We have that

\[\partial_{\bar{\mathbf{x}}} \bar{f}(\bar{\mathbf{x}}) = \boldsymbol{\Lambda} \bar{\mathbf{x}} + \bar{\mathbf{c}} = \boldsymbol{\Lambda} \left(\bar{\mathbf{x}} - \bar{\mathbf{x}}_0\right),\]

where \(\bar{\mathbf{x}}_0\) is the minimizer of \(\bar{f}\). Hence the magnitude of the gradient depends both on \(\boldsymbol{\Lambda}\) and the distance from optimality. If \(\bar{\mathbf{x}} - \bar{\mathbf{x}}_0\) didn't change, this would be all that's needed. After all, in this case the magnitude of the gradient \(\partial_{\bar{\mathbf{x}}} \bar{f}(\bar{\mathbf{x}})\) suffices. Since Adagrad is a stochastic gradient descent algorithm, we will see gradients with nonzero variance even at optimality. As a result we can safely use the variance of the gradients as a cheap proxy for the scale of the Hessian. A thorough analysis is beyond the scope of this section (it would take several pages). We refer the reader to [Duchi et al., 2011] for details.
11.7.3. The Algorithm
Let us formalize the discussion from above. We use the variable \(\mathbf{s}_t\) to accumulate past gradient variance as follows.

\[\begin{aligned}
    \mathbf{g}_t & = \partial_{\mathbf{w}} l(y_t, f(\mathbf{x}_t, \mathbf{w})), \\
    \mathbf{s}_t & = \mathbf{s}_{t-1} + \mathbf{g}_t^2, \\
    \mathbf{w}_t & = \mathbf{w}_{t-1} - \frac{\eta}{\sqrt{\mathbf{s}_t + \epsilon}} \cdot \mathbf{g}_t.
\end{aligned}\]
Here the operations are applied coordinate-wise. That is, \(\mathbf{v}^2\) has entries \(v_i^2\). Likewise \(\frac{1}{\sqrt{\mathbf{v}}}\) has entries \(\frac{1}{\sqrt{v_i}}\) and \(\mathbf{u} \cdot \mathbf{v}\) has entries \(u_i v_i\). As before, \(\eta\) is the learning rate and \(\epsilon\) is an additive constant that ensures that we do not divide by \(0\). Last, we initialize \(\mathbf{s}_0 = \mathbf{0}\).
Just like in the case of momentum we need to keep track of an auxiliary variable, in this case to allow for an individual learning rate per coordinate. This does not increase the cost of Adagrad significantly relative to SGD, simply since the main cost is typically to compute \(l(y_t, f(\mathbf{x}_t, \mathbf{w}))\) and its derivative.
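To see the per-coordinate adaptation in numbers, here is a throwaway plain-Java sketch (the gradient magnitudes \(10\) and \(0.1\) are made up): after a few steps the coordinate that keeps seeing large gradients ends up with a much smaller effective learning rate \(\eta / \sqrt{s_t + \epsilon}\) than the one with small gradients.
double etaIllus = 0.4, epsIllus = 1e-6;
double sLarge = 0, sSmall = 0;  // accumulated squared gradients for two hypothetical coordinates
for (int t = 1; t <= 5; t++) {
    double gLarge = 10.0;       // coordinate with consistently large gradients (made up)
    double gSmall = 0.1;        // coordinate with consistently small gradients (made up)
    sLarge += gLarge * gLarge;
    sSmall += gSmall * gSmall;
    System.out.printf("t=%d  lr(large)=%.4f  lr(small)=%.4f%n",
            t, etaIllus / Math.sqrt(sLarge + epsIllus), etaIllus / Math.sqrt(sSmall + epsIllus));
}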
Note that accumulating squared gradients in \(\mathbf{s}_t\) means that \(\mathbf{s}_t\) grows essentially at a linear rate (somewhat slower than linearly in practice, since the gradients initially diminish). This leads to an \(\mathcal{O}(t^{-\frac{1}{2}})\) learning rate, albeit adjusted on a per-coordinate basis. For convex problems this is perfectly adequate. In deep learning, though, we might want to decrease the learning rate rather more slowly. This led to a number of Adagrad variants that we will discuss in the subsequent sections. For now let us see how it behaves in a quadratic convex problem. We use the same problem as before: \(f(\mathbf{x}) = 0.1 x_1^2 + 2 x_2^2\).
We are going to implement Adagrad using the same learning rate as previously, i.e., \(\eta = 0.4\). As we can see, the iterative trajectory of the independent variable is smoother. However, due to the cumulative effect of \(\mathbf{s}_t\), the learning rate continuously decays, so the independent variable does not move as much during later stages of iteration.
%load ../utils/djl-imports
%load ../utils/plot-utils
%load ../utils/Functions.java
%load ../utils/GradDescUtils.java
%load ../utils/Accumulator.java
%load ../utils/StopWatch.java
%load ../utils/Training.java
%load ../utils/TrainingChapter11.java
float eta = 0.4f;
Function<Float[], Float[]> adagrad2d = (state) -> {
    Float x1 = state[0], x2 = state[1], s1 = state[2], s2 = state[3];
    float eps = (float) 1e-6;
    // Gradient of f(x1, x2) = 0.1 * x1^2 + 2 * x2^2
    float g1 = 0.2f * x1;
    float g2 = 4 * x2;
    // Accumulate the squared gradients per coordinate
    s1 += g1 * g1;
    s2 += g2 * g2;
    // Per-coordinate update with effective learning rate eta / sqrt(s + eps)
    x1 -= eta / (float) Math.sqrt(s1 + eps) * g1;
    x2 -= eta / (float) Math.sqrt(s2 + eps) * g2;
    return new Float[]{x1, x2, s1, s2};
};
BiFunction<Float, Float, Float> f2d = (x1, x2) -> 0.1f * x1 * x1 + 2 * x2 * x2;
GradDescUtils.showTrace2d(f2d, GradDescUtils.train2d(adagrad2d, 20));
As we increase the learning rate to \(2\) we see much better behavior. This already indicates that the decrease in learning rate might be rather aggressive, even in the noise-free case, and we need to ensure that parameters converge appropriately.
eta = 2;
GradDescUtils.showTrace2d(f2d, GradDescUtils.train2d(adagrad2d, 20));
11.7.4. Implementation from Scratch
Just like the momentum method, Adagrad needs to maintain a state variable of the same shape as the parameters.
NDList initAdagradStates(int featureDimension) {
    NDManager manager = NDManager.newBaseManager();
    // One accumulator per parameter, with matching shapes
    NDArray sW = manager.zeros(new Shape(featureDimension, 1));  // state for the weights
    NDArray sB = manager.zeros(new Shape(1));                    // state for the bias
    return new NDList(sW, sB);
}
public class Optimization {
    public static void adagrad(NDList params, NDList states, Map<String, Float> hyperparams) {
        float eps = (float) 1e-6;
        for (int i = 0; i < params.size(); i++) {
            NDArray param = params.get(i);
            NDArray state = states.get(i);
            // Accumulate the squared gradient: s <- s + g^2
            state.addi(param.getGradient().square());
            // Update the parameter: w <- w - lr * g / sqrt(s + eps)
            param.subi(param.getGradient().mul(hyperparams.get("lr")).div(state.add(eps).sqrt()));
        }
    }
}
Compared to the experiment in Section 11.5 we use a larger learning rate to train the model.
AirfoilRandomAccess airfoil = TrainingChapter11.getDataCh11(10, 1500);
public TrainingChapter11.LossTime trainAdagrad(float lr, int numEpochs) throws IOException, TranslateException {
int featureDimension = airfoil.getColumnNames().size();
Map<String, Float> hyperparams = new HashMap<>();
hyperparams.put("lr", lr);
return TrainingChapter11.trainCh11(Optimization::adagrad,
initAdagradStates(featureDimension),
hyperparams, airfoil, featureDimension, numEpochs);
}
TrainingChapter11.LossTime lossTime = trainAdagrad(0.1f, 2);
loss: 0.243, 0.083 sec/epoch
11.7.5. Concise Implementation
We can use the Adagrad algorithm in DJL by creating an instance of Adagrad from Optimizer. Then we can pass it into our trainConciseCh11() function defined in Section 11.5 to train with it!
Tracker lrt = Tracker.fixed(0.1f);
Optimizer adagrad = Optimizer.adagrad().optLearningRateTracker(lrt).build();
TrainingChapter11.trainConciseCh11(adagrad, airfoil, 2);
INFO Training on: 1 GPUs.
INFO Load MXNet Engine Version 1.9.0 in 0.064 ms.
Training: 100% |████████████████████████████████████████| Accuracy: 1.00, L2Loss: 0.26
loss: 0.243, 0.169 sec/epoch
11.7.6. Summary
Adagrad decreases the learning rate dynamically on a per-coordinate basis.
It uses the magnitude of the gradient as a means of adjusting how quickly progress is achieved: coordinates with large gradients are compensated with a smaller learning rate.
Computing the exact second derivative is typically infeasible in deep learning problems due to memory and computational constraints. The gradient can be a useful proxy.
If the optimization problem has a rather uneven structure Adagrad can help mitigate the distortion.
Adagrad is particularly effective for sparse features where the learning rate needs to decrease more slowly for infrequently occurring terms.
On deep learning problems Adagrad can sometimes be too aggressive in reducing learning rates. We will discuss strategies for mitigating this in the context of Section 11.10.
11.7.7. Exercises
Prove that for an orthogonal matrix \(\mathbf{U}\) and a vector \(\mathbf{c}\) the following holds: \(\|\mathbf{c} - \mathbf{\delta}\|_2 = \|\mathbf{U} \mathbf{c} - \mathbf{U} \mathbf{\delta}\|_2\). Why does this mean that the magnitude of perturbations does not change after an orthogonal change of variables?
Try out Adagrad for \(f(\mathbf{x}) = 0.1 x_1^2 + 2 x_2^2\) and also for the case where the objective function is rotated by 45 degrees, i.e., \(f(\mathbf{x}) = 0.1 (x_1 + x_2)^2 + 2 (x_1 - x_2)^2\). Does it behave differently?
Prove Gerschgorin’s circle theorem which states that eigenvalues \(\lambda_i\) of a matrix \(\mathbf{M}\) satisfy \(|\lambda_i - \mathbf{M}_{jj}| \leq \sum_{k \neq j} |\mathbf{M}_{jk}|\) for at least one choice of \(j\).
What does Gerschgorin’s theorem tell us about the eigenvalues of the diagonally preconditioned matrix \(\mathrm{diag}^{-\frac{1}{2}}(\mathbf{M}) \mathbf{M} \mathrm{diag}^{-\frac{1}{2}}(\mathbf{M})\)?
Try out Adagrad for a proper deep network, such as Section 6.6 when applied to Fashion MNIST.
How would you need to modify Adagrad to achieve a less aggressive decay in learning rate?