Run this notebook online:\ |Binder| or Colab: |Colab|

.. |Binder| image:: https://mybinder.org/badge_logo.svg
   :target: https://mybinder.org/v2/gh/deepjavalibrary/d2l-java/master?filepath=chapter_optimization/rmsprop.ipynb

.. |Colab| image:: https://colab.research.google.com/assets/colab-badge.svg
   :target: https://colab.research.google.com/github/deepjavalibrary/d2l-java/blob/colab/chapter_optimization/rmsprop.ipynb

.. _sec_rmsprop:

RMSProp
=======

One of the key issues in :numref:`sec_adagrad` is that the learning rate
decreases at a predefined schedule of effectively
:math:`\mathcal{O}(t^{-\frac{1}{2}})`. While this is generally appropriate for
convex problems, it might not be ideal for nonconvex ones, such as those
encountered in deep learning. Yet, the coordinate-wise adaptivity of Adagrad is
highly desirable as a preconditioner. :cite:`Tieleman.Hinton.2012` proposed the
RMSProp algorithm as a simple fix to decouple rate scheduling from
coordinate-adaptive learning rates. The issue is that Adagrad accumulates the
squares of the gradient :math:`\mathbf{g}_t` into a state vector
:math:`\mathbf{s}_t = \mathbf{s}_{t-1} + \mathbf{g}_t^2`. As a result
:math:`\mathbf{s}_t` keeps on growing without bound due to the lack of
normalization, essentially linearly as the algorithm converges.

One way of fixing this problem would be to use :math:`\mathbf{s}_t / t`. For
reasonable distributions of :math:`\mathbf{g}_t` this will converge.
Unfortunately it might take a very long time until the limit behavior starts to
matter since the procedure remembers the full trajectory of values. An
alternative is to use a leaky average in the same way we used in the momentum
method, i.e., :math:`\mathbf{s}_t \leftarrow \gamma \mathbf{s}_{t-1} +
(1-\gamma) \mathbf{g}_t^2` for some parameter :math:`\gamma > 0`. Keeping all
other parts unchanged yields RMSProp.

The Algorithm
-------------

Let us write out the equations in detail.

.. math::

   \begin{aligned}
       \mathbf{s}_t & \leftarrow \gamma \mathbf{s}_{t-1} + (1 - \gamma) \mathbf{g}_t^2, \\
       \mathbf{x}_t & \leftarrow \mathbf{x}_{t-1} - \frac{\eta}{\sqrt{\mathbf{s}_t + \epsilon}} \odot \mathbf{g}_t.
   \end{aligned}

The constant :math:`\epsilon > 0` is typically set to :math:`10^{-6}` to ensure
that we do not suffer from division by zero or overly large step sizes. Given
this expansion we are now free to control the learning rate :math:`\eta`
independently of the scaling that is applied on a per-coordinate basis. In
terms of leaky averages we can apply the same reasoning as previously applied
in the case of the momentum method. Expanding the definition of
:math:`\mathbf{s}_t` yields

.. math::

   \begin{aligned}
       \mathbf{s}_t & = (1 - \gamma) \mathbf{g}_t^2 + \gamma \mathbf{s}_{t-1} \\
       & = (1 - \gamma) \left(\mathbf{g}_t^2 + \gamma \mathbf{g}_{t-1}^2 + \gamma^2 \mathbf{g}_{t-2}^2 + \ldots \right).
   \end{aligned}

As before in :numref:`sec_momentum` we use
:math:`1 + \gamma + \gamma^2 + \ldots = \frac{1}{1-\gamma}`. Hence the sum of
weights is normalized to :math:`1`, with an effective time horizon of roughly
:math:`(1-\gamma)^{-1}` observations. Let us visualize the weights for the past
40 timesteps for various choices of :math:`\gamma`.

.. code:: java

    %load ../utils/djl-imports
    %load ../utils/plot-utils
    %load ../utils/Functions.java
    %load ../utils/GradDescUtils.java
    %load ../utils/Accumulator.java
    %load ../utils/StopWatch.java
    %load ../utils/Training.java
    %load ../utils/TrainingChapter11.java
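Before plotting, a quick numeric check of the normalization: summing the
weights :math:`(1-\gamma)\gamma^k` over the same 40-step window should give a
value close to :math:`1`. The snippet below is purely illustrative and not part
of the notebook's utility classes.

.. code:: java

    // Sum the leaky-average weights (1 - gamma) * gamma^k over a 40-step window.
    // By the geometric series this equals 1 - gamma^40, about 0.985 for gamma = 0.9,
    // so the weights are (almost) normalized to 1.
    float gamma = 0.9f;
    float weightSum = 0f;
    for (int k = 0; k < 40; k++) {
        weightSum += (1 - gamma) * (float) Math.pow(gamma, k);
    }
    System.out.println("sum of weights over 40 steps: " + weightSum);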
.. code:: java

    NDManager manager = NDManager.newBaseManager();

    float[] gammas = new float[]{0.95f, 0.9f, 0.8f, 0.7f};

    NDArray timesND = manager.arange(40f);
    float[] times = timesND.toFloatArray();
    display(GradDescUtils.plotGammas(times, gammas, 600, 400));
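The two RMSProp updates above translate almost line for line into element-wise
``NDArray`` operations. The following single-step sketch uses placeholder names
and values (``param``, ``state``, ``grad``, and the learning rate) that are not
taken from the notebook; in a real model the gradient would come from
``param.getGradient()`` after a backward pass.

.. code:: java

    float lr = 0.01f;      // learning rate (eta), illustrative value
    float rho = 0.9f;      // leaky-average coefficient (gamma above)
    float eps = 1e-6f;

    // Placeholder parameter, state, and gradient; the gradient shown is that of
    // 0.1 * x1^2 + 2 * x2^2 at (1, -2), used purely for illustration.
    NDArray param = manager.create(new float[]{1f, -2f});
    NDArray state = manager.zeros(param.getShape());
    NDArray grad = manager.create(new float[]{0.2f, -8f});

    // s <- gamma * s + (1 - gamma) * g^2
    state.muli(rho).addi(grad.mul(grad).mul(1 - rho));
    // x <- x - eta / sqrt(s + eps) * g  (element-wise)
    param.subi(grad.mul(lr).div(state.add(eps).sqrt()));

    System.out.println(param);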
Implementation from Scratch
---------------------------

As before we use the quadratic function
:math:`f(\mathbf{x})=0.1x_1^2+2x_2^2` to observe the trajectory of RMSProp.
Recall that in :numref:`sec_adagrad`, when we used Adagrad with a learning rate
of 0.4, the variables moved only very slowly in the later stages of the
algorithm since the learning rate decreased too quickly. Since :math:`\eta` is
controlled separately, this does not happen with RMSProp.

.. code:: java

    float eta = 0.4f;
    float gamma = 0.9f;
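The update function for this objective can then be written per coordinate. The
sketch below follows the trajectory for 20 steps and prints the final position;
the starting point :math:`(-5, -2)`, the step count, and the plain printing (in
place of the ``GradDescUtils`` trajectory-plotting helpers) are illustrative
choices rather than the notebook's own code.

.. code:: java

    import java.util.function.Function;

    // RMSProp on f(x) = 0.1 * x1^2 + 2 * x2^2; the state array holds {x1, x2, s1, s2}.
    Function<Float[], Float[]> rmsProp2d = (state) -> {
        float x1 = state[0], x2 = state[1], s1 = state[2], s2 = state[3];
        float eps = 1e-6f;
        float g1 = 0.2f * x1;                     // df/dx1
        float g2 = 4f * x2;                       // df/dx2
        s1 = gamma * s1 + (1 - gamma) * g1 * g1;  // leaky average of squared gradients
        s2 = gamma * s2 + (1 - gamma) * g2 * g2;
        x1 -= eta / (float) Math.sqrt(s1 + eps) * g1;
        x2 -= eta / (float) Math.sqrt(s2 + eps) * g2;
        return new Float[]{x1, x2, s1, s2};
    };

    // Follow the trajectory for 20 steps from an illustrative start point (-5, -2).
    Float[] cur = new Float[]{-5f, -2f, 0f, 0f};
    for (int t = 0; t < 20; t++) {
        cur = rmsProp2d.apply(cur);
    }
    System.out.println("x1 = " + cur[0] + ", x2 = " + cur[1]);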