Run this notebook online:\ |Binder| or Colab: |Colab|
.. |Binder| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/deepjavalibrary/d2l-java/master?filepath=chapter_optimization/rmsprop.ipynb
.. |Colab| image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/deepjavalibrary/d2l-java/blob/colab/chapter_optimization/rmsprop.ipynb
.. _sec_rmsprop:
RMSProp
=======
One of the key issues in :numref:`sec_adagrad` is that the learning
rate decreases at a predefined schedule of effectively
:math:`\mathcal{O}(t^{-\frac{1}{2}})`. While this is generally
appropriate for convex problems, it might not be ideal for nonconvex
ones, such as those encountered in deep learning. Yet, the
coordinate-wise adaptivity of Adagrad is highly desirable as a
preconditioner.
:cite:`Tieleman.Hinton.2012` proposed the RMSProp algorithm as a
simple fix to decouple rate scheduling from coordinate-adaptive learning
rates. The issue is that Adagrad accumulates the squares of the gradient
:math:`\mathbf{g}_t` into a state vector
:math:`\mathbf{s}_t = \mathbf{s}_{t-1} + \mathbf{g}_t^2`. As a result
:math:`\mathbf{s}_t` keeps on growing without bound due to the lack of
normalization, essentially linearly as the algorithm converges.
One way of fixing this problem would be to use :math:`\mathbf{s}_t / t`.
For reasonable distributions of :math:`\mathbf{g}_t` this will converge.
Unfortunately it might take a very long time until the limit behavior
starts to matter since the procedure remembers the full trajectory of
values. An alternative is to use a leaky average in the same way as in
the momentum method, i.e.,
:math:`\mathbf{s}_t \leftarrow \gamma \mathbf{s}_{t-1} + (1-\gamma) \mathbf{g}_t^2`
for some parameter :math:`\gamma > 0`. Keeping all other parts unchanged
yields RMSProp.
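To see the difference concretely, here is a minimal scalar sketch (with
an illustrative constant gradient, not taken from the text) contrasting
Adagrad's unbounded accumulator with the leaky average of RMSProp:

.. code:: java

    float gamma = 0.9f;
    float sAdagrad = 0f;
    float sRmsProp = 0f;
    // Feed a constant gradient: Adagrad's state grows linearly without bound,
    // while the leaky average converges to its fixed point g * g = 1.
    for (int t = 1; t <= 100; t++) {
        float g = 1f;
        sAdagrad += g * g;                                 // after t steps: t
        sRmsProp = gamma * sRmsProp + (1 - gamma) * g * g; // approaches 1
    }
    System.out.println(sAdagrad + " vs. " + sRmsProp);     // 100.0 vs. ~0.99997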
The Algorithm
-------------
Let us write out the equations in detail.
.. math::
\begin{aligned}
\mathbf{s}_t & \leftarrow \gamma \mathbf{s}_{t-1} + (1 - \gamma) \mathbf{g}_t^2, \\
\mathbf{x}_t & \leftarrow \mathbf{x}_{t-1} - \frac{\eta}{\sqrt{\mathbf{s}_t + \epsilon}} \odot \mathbf{g}_t.
\end{aligned}
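To make the update concrete, consider one step for a single scalar
coordinate with made-up numbers (a sketch, not part of the original
notebook):

.. code:: java

    float gamma = 0.9f;
    float eta = 0.4f;
    float eps = 1e-6f;
    float s = 0f; // second-moment state, initialized to zero
    float g = 2f; // current gradient of this coordinate
    s = gamma * s + (1 - gamma) * g * g;               // s = 0.4
    float step = eta / (float) Math.sqrt(s + eps) * g; // roughly 1.26
    // The coordinate would then be decreased by `step`.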
The constant :math:`\epsilon > 0` is typically set to :math:`10^{-6}` to
ensure that we do not suffer from division by zero or overly large step
sizes. Given this expansion we are now free to control the learning rate
:math:`\eta` independently of the scaling that is applied on a
per-coordinate basis. In terms of leaky averages we can apply the same
reasoning as previously applied in the case of the momentum method.
Expanding the definition of :math:`\mathbf{s}_t` yields
.. math::
\begin{aligned}
\mathbf{s}_t & = (1 - \gamma) \mathbf{g}_t^2 + \gamma \mathbf{s}_{t-1} \\
    & = (1 - \gamma) \left(\mathbf{g}_t^2 + \gamma \mathbf{g}_{t-1}^2 + \gamma^2 \mathbf{g}_{t-2}^2 + \ldots \right).
\end{aligned}
As before in :numref:`sec_momentum` we use
:math:`1 + \gamma + \gamma^2 + \ldots = \frac{1}{1-\gamma}`. Hence the
sum of weights is normalized to :math:`1`, and the average effectively
extends over the past :math:`(1-\gamma)^{-1}` observations. Let us
visualize the weights for the
past 40 timesteps for various choices of :math:`\gamma`.
.. code:: java
%load ../utils/djl-imports
%load ../utils/plot-utils
%load ../utils/Functions.java
%load ../utils/GradDescUtils.java
%load ../utils/Accumulator.java
%load ../utils/StopWatch.java
%load ../utils/Training.java
%load ../utils/TrainingChapter11.java
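As a quick numerical sanity check before plotting (a plain-Java sketch,
not part of the original notebook), the weights
:math:`(1-\gamma)\gamma^k` over the past 40 timesteps should sum to
almost :math:`1`:

.. code:: java

    float gamma = 0.9f;
    float sum = 0f;
    for (int k = 0; k < 40; k++) {
        sum += (1 - gamma) * (float) Math.pow(gamma, k);
    }
    // Prints 1 - 0.9^40, about 0.985; the remaining mass lies in older terms.
    System.out.println(sum);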
.. code:: java
NDManager manager = NDManager.newBaseManager();
float[] gammas = new float[]{0.95f, 0.9f, 0.8f, 0.7f};
NDArray timesND = manager.arange(40f);
float[] times = timesND.toFloatArray();
display(GradDescUtils.plotGammas(times, gammas, 600, 400));
Implementation from Scratch
---------------------------
As before we use the quadratic function
:math:`f(\mathbf{x})=0.1x_1^2+2x_2^2` to observe the trajectory of
RMSProp. Recall that in :numref:`sec_adagrad`, when we used Adagrad
with a learning rate of 0.4, the variables moved only very slowly in the
later stages of the algorithm since the learning rate decreased too
quickly. Since :math:`\eta` is controlled separately this does not
happen with RMSProp.
.. code:: java
float eta = 0.4f;
float gamma = 0.9f;
    Function<Float[], Float[]> rmsProp2d = (state) -> {
Float x1 = state[0], x2 = state[1], s1 = state[2], s2 = state[3];
float g1 = 0.2f * x1;
float g2 = 4 * x2;
float eps = (float) 1e-6;
s1 = gamma * s1 + (1 - gamma) * g1 * g1;
s2 = gamma * s2 + (1 - gamma) * g2 * g2;
x1 -= eta / (float) Math.sqrt(s1 + eps) * g1;
x2 -= eta / (float) Math.sqrt(s2 + eps) * g2;
return new Float[]{x1, x2, s1, s2};
};
    BiFunction<Float, Float, Float> f2d = (x1, x2) -> {
return 0.1f * x1 * x1 + 2 * x2 * x2;
};
GradDescUtils.showTrace2d(f2d, GradDescUtils.train2d(rmsProp2d, 20));
.. parsed-literal::
:class: output
Tablesaw not supporting for contour and meshgrids, will update soon
.. figure:: https://d2l-java-resources.s3.amazonaws.com/img/chapter_optim-rmsprop-gd2d.svg
    RMSProp Gradient Descent 2D.
Next, we implement RMSProp to be used in a deep network. This is equally
straightforward.
.. code:: java
NDList initRmsPropStates(int featureDimension) {
NDManager manager = NDManager.newBaseManager();
NDArray sW = manager.zeros(new Shape(featureDimension, 1));
NDArray sB = manager.zeros(new Shape(1));
return new NDList(sW, sB);
}
public class Optimization {
        public static void rmsProp(NDList params, NDList states, Map<String, Float> hyperparams) {
float gamma = hyperparams.get("gamma");
float eps = (float) 1e-6;
for (int i = 0; i < params.size(); i++) {
NDArray param = params.get(i);
NDArray state = states.get(i);
// Update parameter and state
                // state = gamma * state + (1 - gamma) * param.gradient^2
state.muli(gamma).addi(param.getGradient().square().mul(1 - gamma));
// param -= lr * param.gradient / sqrt(s + eps)
param.subi(param.getGradient().mul(hyperparams.get("lr")).div(state.add(eps).sqrt()));
}
}
}
We set the initial learning rate to 0.01 and the weighting term
:math:`\gamma` to 0.9. That is, :math:`\mathbf{s}` aggregates on average
over the past :math:`1/(1-\gamma) = 10` observations of the squared
gradient.
.. code:: java
AirfoilRandomAccess airfoil = TrainingChapter11.getDataCh11(10, 1500);
public TrainingChapter11.LossTime trainRmsProp(float lr, float gamma, int numEpochs)
throws IOException, TranslateException {
int featureDimension = airfoil.getColumnNames().size();
        Map<String, Float> hyperparams = new HashMap<>();
hyperparams.put("lr", lr);
hyperparams.put("gamma", gamma);
return TrainingChapter11.trainCh11(Optimization::rmsProp,
initRmsPropStates(featureDimension),
hyperparams, airfoil,
featureDimension, numEpochs);
}
trainRmsProp(0.01f, 0.9f, 2);
.. parsed-literal::
:class: output
loss: 0.244, 0.088 sec/epoch
.. parsed-literal::
:class: output
REPL.$JShell$154B$TrainingChapter11$LossTime@48ba05f3
Concise Implementation
----------------------
Since RMSProp is a rather popular algorithm it is also available in
``Optimizer``. We create an instance of ``RmsProp`` and set its learning
rate and the weighting :math:`\gamma`, which the builder exposes as
``rho``.
.. code:: java
Tracker lrt = Tracker.fixed(0.01f);
Optimizer rmsProp = Optimizer.rmsprop().optLearningRateTracker(lrt).optRho(0.9f).build();
TrainingChapter11.trainConciseCh11(rmsProp, airfoil, 2);
.. parsed-literal::
:class: output
INFO Training on: 1 GPUs.
INFO Load MXNet Engine Version 1.9.0 in 0.061 ms.
.. parsed-literal::
:class: output
Training: 100% |████████████████████████████████████████| Accuracy: 1.00, L2Loss: 0.28
loss: 0.243, 0.141 sec/epoch
Summary
-------
- RMSProp is very similar to Adagrad insofar as both use the square of
the gradient to scale coefficients.
- RMSProp shares with momentum the leaky averaging. However, RMSProp
uses the technique to adjust the coefficient-wise preconditioner.
- The learning rate needs to be scheduled by the experimenter in
practice.
- The coefficient :math:`\gamma` determines how long the history is
when adjusting the per-coordinate scale.
Exercises
---------
1. What happens experimentally if we set :math:`\gamma = 1`? Why?
2. Rotate the optimization problem to minimize
:math:`f(\mathbf{x}) = 0.1 (x_1 + x_2)^2 + 2 (x_1 - x_2)^2`. What
happens to the convergence?
3. Try out what happens to RMSProp on a real machine learning problem,
such as training on Fashion-MNIST. Experiment with different choices
for adjusting the learning rate.
4. Would you want to adjust :math:`\gamma` as optimization progresses?
How sensitive is RMSProp to this?