9.3. Deep Recurrent Neural Networks¶
Up to now, we have only discussed RNNs with a single unidirectional hidden layer. In such a model, the specific functional form of how latent variables and observations interact is rather arbitrary. This is not a big problem as long as we have enough flexibility to model different types of interactions. With a single layer, however, this can be quite challenging. In the case of linear models, we fixed this problem by adding more layers. Within RNNs this is a bit trickier, since we first need to decide how and where to add extra nonlinearity.
In fact, we could stack multiple layers of RNNs on top of each other. This results in a flexible mechanism, due to the combination of several simple layers. In particular, data might be relevant at different levels of the stack. For instance, we might want to keep high-level data about financial market conditions (bear or bull market) available, whereas at a lower level we only record shorter-term temporal dynamics.
Beyond all the above abstract discussion, it is probably easiest to understand the family of models we are interested in by reviewing fig_deep_rnn. It describes a deep RNN with \(L\) hidden layers. Each hidden state is continuously passed to both the next time step of the current layer and the current time step of the next layer.
Fig. fig_deep_rnn: Architecture of a deep RNN.
9.3.1. Functional Dependencies¶
We can formalize the functional dependencies within the deep architecture of \(L\) hidden layers depicted in fig_deep_rnn. Our following discussion focuses primarily on the vanilla RNN model, but it applies to other sequence models, too.
Suppose that we have a minibatch input \(\mathbf{X}_t \in \mathbb{R}^{n \times d}\) (number of examples: \(n\), number of inputs in each example: \(d\)) at time step \(t\). At the same time step, let the hidden state of the \(l^\mathrm{th}\) hidden layer (\(l=1,\ldots,L\)) be \(\mathbf{H}_t^{(l)} \in \mathbb{R}^{n \times h}\) (number of hidden units: \(h\)) and the output layer variable be \(\mathbf{O}_t \in \mathbb{R}^{n \times q}\) (number of outputs: \(q\)). Setting \(\mathbf{H}_t^{(0)} = \mathbf{X}_t\), the hidden state of the \(l^\mathrm{th}\) hidden layer that uses the activation function \(\phi_l\) is expressed as follows:
\[\mathbf{H}_t^{(l)} = \phi_l\left(\mathbf{H}_t^{(l-1)} \mathbf{W}_{xh}^{(l)} + \mathbf{H}_{t-1}^{(l)} \mathbf{W}_{hh}^{(l)} + \mathbf{b}_h^{(l)}\right), \tag{9.3.1}\]
where the weights \(\mathbf{W}_{xh}^{(l)} \in \mathbb{R}^{h \times h}\) (with \(\mathbf{W}_{xh}^{(1)} \in \mathbb{R}^{d \times h}\) for the first layer, whose input is \(\mathbf{X}_t\)) and \(\mathbf{W}_{hh}^{(l)} \in \mathbb{R}^{h \times h}\), together with the bias \(\mathbf{b}_h^{(l)} \in \mathbb{R}^{1 \times h}\), are the model parameters of the \(l^\mathrm{th}\) hidden layer.
In the end, the calculation of the output layer is only based on the hidden state of the final \(L^\mathrm{th}\) hidden layer:
\[\mathbf{O}_t = \mathbf{H}_t^{(L)} \mathbf{W}_{hq} + \mathbf{b}_q,\]
where the weight \(\mathbf{W}_{hq} \in \mathbb{R}^{h \times q}\) and the bias \(\mathbf{b}_q \in \mathbb{R}^{1 \times q}\) are the model parameters of the output layer.
Just as with MLPs, the number of hidden layers \(L\) and the number of hidden units \(h\) are hyperparameters. In other words, they can be tuned or specified by us. In addition, we can easily get a deep gated RNN by replacing the hidden state computation in (9.3.1) with that from a GRU or an LSTM.
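Before moving to the built-in implementation, the recurrence in (9.3.1) can be made concrete with a short from-scratch sketch of a single time step using DJL's NDArray API (the required imports are loaded in the next section via djl-imports). The method name deepRnnStep and the per-layer parameter containers Wxh, Whh, and bh are hypothetical and chosen purely for illustration; they are not part of the book's utilities, and tanh stands in for the per-layer activation \(\phi_l\).
// Hypothetical sketch: one time step of a deep RNN as in (9.3.1).
// Hprev holds H_{t-1}^{(1)}, ..., H_{t-1}^{(L)}; Wxh, Whh, bh hold the per-layer parameters.
public static NDArray[] deepRnnStep(NDArray X, NDArray[] Hprev, NDList Wxh, NDList Whh, NDList bh) {
    int numLayers = Hprev.length;
    NDArray[] H = new NDArray[numLayers];
    NDArray input = X; // H_t^{(0)} = X_t
    for (int l = 0; l < numLayers; l++) {
        // H_t^{(l)} = tanh(H_t^{(l-1)} Wxh^{(l)} + H_{t-1}^{(l)} Whh^{(l)} + b_h^{(l)})
        H[l] = input.dot(Wxh.get(l)).add(Hprev[l].dot(Whh.get(l))).add(bh.get(l)).tanh();
        input = H[l]; // feeds the current time step of the next layer
    }
    return H; // an output layer would read only H[numLayers - 1]
}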
9.3.2. Concise Implementation¶
Fortunately, many of the logistical details required to implement multiple layers of an RNN are readily available in high-level APIs. To keep things simple, we only illustrate the implementation using such built-in functionality. Let us take an LSTM model as an example. The code is very similar to the one we used previously in Section 9.2. In fact, the only difference is that we specify the number of layers explicitly rather than picking the default of a single layer. As usual, we begin by loading the dataset.
%load ../utils/djl-imports
%load ../utils/plot-utils
%load ../utils/Functions.java
%load ../utils/PlotUtils.java
%load ../utils/StopWatch.java
%load ../utils/Accumulator.java
%load ../utils/Animator.java
%load ../utils/Training.java
%load ../utils/timemachine/Vocab.java
%load ../utils/timemachine/RNNModel.java
%load ../utils/timemachine/RNNModelScratch.java
%load ../utils/timemachine/TimeMachine.java
%load ../utils/timemachine/TimeMachineDataset.java
NDManager manager = NDManager.newBaseManager();
int batchSize = 32;
int numSteps = 35;
TimeMachineDataset dataset = new TimeMachineDataset.Builder()
.setManager(manager)
.setMaxTokens(10000)
.setSampling(batchSize, false)
.setSteps(numSteps)
.build();
dataset.prepare();
Vocab vocab = dataset.getVocab();
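As a quick sanity check (not part of the original notebook), we can peek at the first minibatch; assuming TimeMachineDataset follows DJL's standard RandomAccessDataset API and the usual imports from djl-imports, both the inputs and the labels should have shape (batchSize, numSteps), i.e., (32, 35).
// Inspect one minibatch: X and Y should both have shape (batchSize, numSteps).
for (Batch batch : dataset.getData(manager)) {
    NDArray X = batch.getData().head();
    NDArray Y = batch.getLabels().head();
    System.out.println("X: " + X.getShape() + ", Y: " + Y.getShape());
    batch.close();
    break;
}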
The architectural decisions such as choosing hyperparameters are very similar to those of Section 9.2. We pick the same number of inputs and outputs as we have distinct tokens, i.e., vocabSize. The number of hidden units is still 256. The only difference is that we now select a nontrivial number of hidden layers by specifying the value of numLayers.
int vocabSize = vocab.length();
int numHiddens = 256;
int numLayers = 2;
Device device = manager.getDevice();
LSTM lstmLayer =
LSTM.builder()
.setNumLayers(numLayers)
.setStateSize(numHiddens)
.optReturnState(true)
.optBatchFirst(false)
.build();
RNNModel model = new RNNModel(lstmLayer, vocabSize);
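As noted above, obtaining a deep gated RNN of a different flavor only requires swapping the recurrent block; the RNNModel wrapper stays the same. Below is a sketch of the corresponding two-layer GRU (cf. Exercise 2); it is not trained here, and the variable names are chosen for illustration.
// Sketch: a two-layer GRU in place of the LSTM (see Exercise 2); not trained here.
GRU gruLayer =
        GRU.builder()
                .setNumLayers(numLayers)
                .setStateSize(numHiddens)
                .optReturnState(true)
                .optBatchFirst(false)
                .build();
RNNModel gruModel = new RNNModel(gruLayer, vocabSize);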
9.3.3. Training and Prediction¶
Since we now instantiate two layers with the LSTM model, this rather more complex architecture slows down training considerably.
int numEpochs = Integer.getInteger("MAX_EPOCH", 500);
int lr = 2;
TimeMachine.trainCh8(model, dataset, vocab, lr, numEpochs, device, false, manager);
INFO Training on: 1 GPUs.
INFO Load MXNet Engine Version 1.9.0 in 0.085 ms.
perplexity: 1.0, 61496.0 tokens/sec on gpu(0)
time traveller wolld he rour at we canting as wore arother direc
travellereathe had ag a mome that beeal of the fourth dimen
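After training, we can also generate additional text from an arbitrary prefix. The following is a minimal sketch, assuming the predictCh8 helper in TimeMachine.java takes the same arguments as in Section 9.2 (prefix, number of predicted tokens, model, vocabulary, device, manager); the prefix and length below are chosen only for illustration.
// Sketch: generate 50 more tokens from a prefix with the trained two-layer LSTM.
String prefix = "time traveller";
int numPreds = 50;
System.out.println(TimeMachine.predictCh8(prefix, numPreds, model, vocab, device, manager));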
9.3.4. Summary¶
In deep RNNs, the hidden state information is passed to the next time step of the current layer and the current time step of the next layer.
There exist many different flavors of deep RNNs, such as LSTMs, GRUs, or vanilla RNNs. Conveniently these models are all available as parts of the high-level APIs of deep learning frameworks.
Initialization of models requires care. Overall, deep RNNs require a considerable amount of work (such as tuning the learning rate and gradient clipping) to ensure proper convergence.
9.3.5. Exercises¶
Try to implement a two-layer RNN from scratch using the single-layer implementation we discussed in Section 8.5.
Replace the LSTM by a GRU and compare the accuracy and training speed.
Increase the training data to include multiple books. How low can you go on the perplexity scale?
Would you want to combine sources of different authors when modeling text? Why is this a good idea? What could go wrong?