9.1. Gated Recurrent Units (GRU)
In Section 8.7, we discussed how gradients are calculated in RNNs. In particular, we found that long products of matrices can lead to vanishing or exploding gradients. Let us briefly think about what such gradient anomalies mean in practice:
We might encounter a situation where an early observation is highly significant for predicting all future observations. Consider the somewhat contrived case where the first observation contains a checksum and the goal is to discern whether the checksum is correct at the end of the sequence. In this case, the influence of the first token is vital. We would like to have some mechanisms for storing vital early information in a memory cell. Without such a mechanism, we will have to assign a very large gradient to this observation, since it affects all the subsequent observations.
We might encounter situations where some tokens carry no pertinent information. For instance, when parsing a web page there might be auxiliary HTML code that is irrelevant for the purpose of assessing the sentiment conveyed on the page. We would like to have some mechanism for skipping such tokens in the latent state representation.
We might encounter situations where there is a logical break between parts of a sequence. For instance, there might be a transition between chapters in a book, or a transition between a bear and a bull market for securities. In this case it would be nice to have a means of resetting our internal state representation.
A number of methods have been proposed to address this. One of the earliest is long short-term memory [Hochreiter & Schmidhuber, 1997] which we will discuss in Section 9.2. The gated recurrent unit (GRU) [Cho et al., 2014] is a slightly more streamlined variant that often offers comparable performance and is significantly faster to compute [Chung et al., 2014]. Due to its simplicity, let us start with the GRU.
9.1.2. Implementation from Scratch
To gain a better understanding of the GRU model, let us implement it from scratch. We begin by reading the time machine dataset that we used in Section 8.5. The code for reading the dataset is given below.
%load ../utils/djl-imports
%load ../utils/plot-utils
%load ../utils/Functions.java
%load ../utils/PlotUtils.java
%load ../utils/StopWatch.java
%load ../utils/Accumulator.java
%load ../utils/Animator.java
%load ../utils/Training.java
%load ../utils/timemachine/Vocab.java
%load ../utils/timemachine/RNNModel.java
%load ../utils/timemachine/RNNModelScratch.java
%load ../utils/timemachine/TimeMachine.java
%load ../utils/timemachine/TimeMachineDataset.java
NDManager manager = NDManager.newBaseManager();
int batchSize = 32;
int numSteps = 35;
TimeMachineDataset dataset =
new TimeMachineDataset.Builder()
.setManager(manager)
.setMaxTokens(10000)
.setSampling(batchSize, false)
.setSteps(numSteps)
.build();
dataset.prepare();
Vocab vocab = dataset.getVocab();
9.1.2.1. Initializing Model Parameters
The next step is to initialize the model parameters. We draw the weights from a Gaussian distribution with a standard deviation of 0.01 and set the biases to 0. The hyperparameter numHiddens defines the number of hidden units. We instantiate all weights and biases related to the update gate, the reset gate, the candidate hidden state, and the output layer.
public static NDArray normal(Shape shape, Device device) {
return manager.randomNormal(0, 0.01f, shape, DataType.FLOAT32, device);
}
public static NDList three(int numInputs, int numHiddens, Device device) {
return new NDList(
normal(new Shape(numInputs, numHiddens), device),
normal(new Shape(numHiddens, numHiddens), device),
manager.zeros(new Shape(numHiddens), DataType.FLOAT32, device));
}
public static NDList getParams(int vocabSize, int numHiddens, Device device) {
int numInputs = vocabSize;
int numOutputs = vocabSize;
// Update gate parameters
NDList temp = three(numInputs, numHiddens, device);
NDArray W_xz = temp.get(0);
NDArray W_hz = temp.get(1);
NDArray b_z = temp.get(2);
// Reset gate parameters
temp = three(numInputs, numHiddens, device);
NDArray W_xr = temp.get(0);
NDArray W_hr = temp.get(1);
NDArray b_r = temp.get(2);
// Candidate hidden state parameters
temp = three(numInputs, numHiddens, device);
NDArray W_xh = temp.get(0);
NDArray W_hh = temp.get(1);
NDArray b_h = temp.get(2);
// Output layer parameters
NDArray W_hq = normal(new Shape(numHiddens, numOutputs), device);
NDArray b_q = manager.zeros(new Shape(numOutputs), DataType.FLOAT32, device);
// Attach gradients
NDList params = new NDList(W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q);
for (NDArray param : params) {
param.setRequiresGradient(true);
}
return params;
}
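As a quick sanity check, one could run a sketch like the following; it assumes the cells above have been executed and reuses the vocab and manager defined earlier.
// Hypothetical check: three arrays each for the update gate, the reset gate, and the
// candidate hidden state, plus the output-layer weight and bias, i.e., 11 arrays in total.
NDList checkParams = getParams(vocab.length(), 256, manager.getDevice());
System.out.println(checkParams.size()); // 11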
9.1.2.2. Defining the Model
Now we define the hidden state initialization function initGruState. Just like the RNN state initialization function defined in Section 8.5, it returns a tensor of shape (batch size, number of hidden units) whose values are all zeros.
public static NDList initGruState(int batchSize, int numHiddens, Device device) {
return new NDList(manager.zeros(new Shape(batchSize, numHiddens), DataType.FLOAT32, device));
}
Now we are ready to define the GRU model. Its structure is the same as that of the basic RNN cell, except that the update equations are more complex.
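For reference, the loop below implements the standard GRU update equations, where \(\sigma\) denotes the sigmoid function and \(\odot\) denotes elementwise multiplication:

\[\begin{aligned}
\mathbf{Z}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xz} + \mathbf{H}_{t-1} \mathbf{W}_{hz} + \mathbf{b}_z),\\
\mathbf{R}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xr} + \mathbf{H}_{t-1} \mathbf{W}_{hr} + \mathbf{b}_r),\\
\tilde{\mathbf{H}}_t &= \tanh\bigl(\mathbf{X}_t \mathbf{W}_{xh} + (\mathbf{R}_t \odot \mathbf{H}_{t-1}) \mathbf{W}_{hh} + \mathbf{b}_h\bigr),\\
\mathbf{H}_t &= \mathbf{Z}_t \odot \mathbf{H}_{t-1} + (1 - \mathbf{Z}_t) \odot \tilde{\mathbf{H}}_t,
\end{aligned}\]

followed by the output-layer computation \(\mathbf{Y}_t = \mathbf{H}_t \mathbf{W}_{hq} + \mathbf{b}_q\).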
public static Pair<NDArray, NDList> gru(NDArray inputs, NDList state, NDList params) {
NDArray W_xz = params.get(0);
NDArray W_hz = params.get(1);
NDArray b_z = params.get(2);
NDArray W_xr = params.get(3);
NDArray W_hr = params.get(4);
NDArray b_r = params.get(5);
NDArray W_xh = params.get(6);
NDArray W_hh = params.get(7);
NDArray b_h = params.get(8);
NDArray W_hq = params.get(9);
NDArray b_q = params.get(10);
NDArray H = state.get(0);
NDList outputs = new NDList();
NDArray X, Y, Z, R, H_tilda;
for (int i = 0; i < inputs.size(0); i++) {
X = inputs.get(i);
Z = Activation.sigmoid(X.dot(W_xz).add(H.dot(W_hz).add(b_z)));
R = Activation.sigmoid(X.dot(W_xr).add(H.dot(W_hr).add(b_r)));
H_tilda = Activation.tanh(X.dot(W_xh).add(R.mul(H).dot(W_hh).add(b_h)));
H = Z.mul(H).add(Z.mul(-1).add(1).mul(H_tilda));
Y = H.dot(W_hq).add(b_q);
outputs.add(Y);
}
return new Pair(outputs.size() > 1 ? NDArrays.concat(outputs) : outputs.get(0), new NDList(H));
}
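Before training, a quick shape check can catch indexing mistakes. The sketch below assumes the cells above have been run and uses small, arbitrary sizes; it feeds an all-zero dummy input of shape (number of steps, batch size, vocabulary size) through gru.
int testSteps = 5;
int testBatch = 2;
int testHiddens = 16;
// Dummy input filled with zeros, shaped (testSteps, testBatch, vocabSize)
NDArray dummyX = manager.zeros(new Shape(testSteps, testBatch, vocab.length()), DataType.FLOAT32, manager.getDevice());
NDList testState = initGruState(testBatch, testHiddens, manager.getDevice());
NDList testParams = getParams(vocab.length(), testHiddens, manager.getDevice());
Pair<NDArray, NDList> result = gru(dummyX, testState, testParams);
System.out.println(result.getKey().getShape());          // (testSteps * testBatch, vocabSize)
System.out.println(result.getValue().get(0).getShape()); // (testBatch, testHiddens)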
9.1.2.3. Training and Prediction
Training and prediction work in exactly the same manner as in Section 8.5. After training, we print out the perplexity on the training set and the predicted sequence following the provided prefixes “time traveller” and “traveller”, respectively.
int vocabSize = vocab.length();
int numHiddens = 256;
Device device = manager.getDevice();
int numEpochs = Integer.getInteger("MAX_EPOCH", 500);
int lr = 1;
Functions.TriFunction<Integer, Integer, Device, NDList> getParamsFn = (a, b, c) -> getParams(a, b, c);
Functions.TriFunction<Integer, Integer, Device, NDList> initGruStateFn =
(a, b, c) -> initGruState(a, b, c);
Functions.TriFunction<NDArray, NDList, NDList, Pair<NDArray, NDList>> gruFn = (a, b, c) -> gru(a, b, c);
RNNModelScratch model =
new RNNModelScratch(vocabSize, numHiddens, device,
getParamsFn, initGruStateFn, gruFn);
TimeMachine.trainCh8(model, dataset, vocab, lr, numEpochs, device, false, manager);
perplexity: 1.1, 12699.6 tokens/sec on gpu(0)
time travellerit s against reason said filby an coursent s cont
travellericelthated and and which is our his usually pace a
9.1.3. Concise Implementation
In high-level APIs, we can directly instantiate a GRU model. This encapsulates all the configuration detail that we made explicit above. The code is significantly faster as it uses compiled operators rather than Java for many details that we spelled out before.
GRU gruLayer =
        GRU.builder()
                .setNumLayers(1)
                .setStateSize(numHiddens)
                .optReturnState(true)
                .optBatchFirst(false)
                .build();
RNNModel modelConcise = new RNNModel(gruLayer, vocab.length());
TimeMachine.trainCh8(modelConcise, dataset, vocab, lr, numEpochs, device, false, manager);
INFO Training on: 1 GPUs.
INFO Load MXNet Engine Version 1.9.0 in 0.063 ms.
perplexity: 1.0, 82645.8 tokens/sec on gpu(0)
time traveller after the pauserequithy a cigurequilby wour dimen
traveller pecpeessed an saing beind on a smalloo so ig time
9.1.4. Summary
Gated RNNs can better capture dependencies for sequences with large time step distances.
Reset gates help capture short-term dependencies in sequences.
Update gates help capture long-term dependencies in sequences.
GRUs contain basic RNNs as an extreme case: when the reset gate is fully open and the update gate is fully closed, the GRU reduces to a vanilla RNN. They can also skip tokens entirely by keeping the update gate switched on (see the equations below).
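To make the last point concrete, setting the reset gate to all ones and the update gate to all zeros yields

\[\mathbf{H}_t = \tanh(\mathbf{X}_t \mathbf{W}_{xh} + \mathbf{H}_{t-1} \mathbf{W}_{hh} + \mathbf{b}_h),\]

which is exactly the vanilla RNN update, whereas setting the update gate to all ones yields \(\mathbf{H}_t = \mathbf{H}_{t-1}\), so the previous state is carried over unchanged and the current token is effectively ignored.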
9.1.5. Exercises
Assume that we only want to use the input at time step \(t'\) to predict the output at time step \(t > t'\). What are the best values for the reset and update gates for each time step?
Adjust the hyperparameters and analyze their influence on running time, perplexity, and the output sequence.
Compare runtime, perplexity, and the output strings for the rnn.RNN and rnn.GRU implementations with each other.
What happens if you implement only parts of a GRU, e.g., with only a reset gate or only an update gate?