9.3. Deep Recurrent Neural Networks

Up to now, we have only discussed RNNs with a single unidirectional hidden layer. There, the specific functional form of how latent variables and observations interact is rather arbitrary. This is not a big problem as long as we have enough flexibility to model different types of interactions. With a single layer, however, this can be quite challenging. In the case of linear models, we fixed this problem by adding more layers. Within RNNs this is a bit trickier, since we first need to decide how and where to add extra nonlinearity.

In fact, we could stack multiple layers of RNNs on top of each other. This results in a flexible mechanism, due to the combination of several simple layers. In particular, data might be relevant at different levels of the stack. For instance, we might want to keep high-level data about financial market conditions (bear or bull market) available, whereas at a lower level we only record shorter-term temporal dynamics.

Beyond all the above abstract discussion, it is probably easiest to understand the family of models we are interested in by reviewing Fig. 9.3.1, which describes a deep RNN with \(L\) hidden layers. Each hidden state is passed both to the next time step of the current layer and to the current time step of the next layer.


Fig. 9.3.1 Architecture of a deep RNN.

9.3.1. Functional Dependencies

We can formalize the functional dependencies within the deep architecture of \(L\) hidden layers depicted in Fig. 9.3.1. Our following discussion focuses primarily on the vanilla RNN model, but it applies to other sequence models, too.

Suppose that we have a minibatch input \(\mathbf{X}_t \in \mathbb{R}^{n \times d}\) (number of examples: \(n\), number of inputs in each example: \(d\)) at time step \(t\). At the same time step, let the hidden state of the \(l^\mathrm{th}\) hidden layer (\(l=1,\ldots,L\)) be \(\mathbf{H}_t^{(l)} \in \mathbb{R}^{n \times h}\) (number of hidden units: \(h\)) and the output layer variable be \(\mathbf{O}_t \in \mathbb{R}^{n \times q}\) (number of outputs: \(q\)). Setting \(\mathbf{H}_t^{(0)} = \mathbf{X}_t\), the hidden state of the \(l^\mathrm{th}\) hidden layer that uses the activation function \(\phi_l\) is expressed as follows:

(9.3.1)\[\mathbf{H}_t^{(l)} = \phi_l(\mathbf{H}_t^{(l-1)} \mathbf{W}_{xh}^{(l)} + \mathbf{H}_{t-1}^{(l)} \mathbf{W}_{hh}^{(l)} + \mathbf{b}_h^{(l)}),\]

where the weights \(\mathbf{W}_{xh}^{(l)} \in \mathbb{R}^{h \times h}\) (for the first layer, \(\mathbf{W}_{xh}^{(1)} \in \mathbb{R}^{d \times h}\), since \(\mathbf{H}_t^{(0)} = \mathbf{X}_t\) has \(d\) columns) and \(\mathbf{W}_{hh}^{(l)} \in \mathbb{R}^{h \times h}\), together with the bias \(\mathbf{b}_h^{(l)} \in \mathbb{R}^{1 \times h}\), are the model parameters of the \(l^\mathrm{th}\) hidden layer.

In the end, the calculation of the output layer is based only on the hidden state of the final \(L^\mathrm{th}\) hidden layer:

(9.3.2)\[\mathbf{O}_t = \mathbf{H}_t^{(L)} \mathbf{W}_{hq} + \mathbf{b}_q,\]

where the weight \(\mathbf{W}_{hq} \in \mathbb{R}^{h \times q}\) and the bias \(\mathbf{b}_q \in \mathbb{R}^{1 \times q}\) are the model parameters of the output layer.

Just as with MLPs, the number of hidden layers \(L\) and the number of hidden units \(h\) are hyperparameters. In other words, they can be tuned or specified by us. In addition, we can easily get a deep gated RNN by replacing the hidden state computation in (9.3.1) with that from a GRU or an LSTM.
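
To make the recursion in (9.3.1) and (9.3.2) concrete, here is a minimal from-scratch sketch in PyTorch. The function name and the parameter lists (W_xh, W_hh, b_h, W_hq, b_q) are hypothetical and simply mirror the symbols in the equations; the built-in layers used later in this section are what you would use in practice.

import torch

def deep_rnn_forward(X_seq, W_xh, W_hh, b_h, W_hq, b_q):
    """Forward pass of a vanilla deep RNN following (9.3.1) and (9.3.2).
    X_seq has shape (num_steps, n, d); W_xh, W_hh, b_h hold one tensor per layer."""
    L, n, h = len(W_hh), X_seq.shape[1], W_hh[0].shape[0]
    H = [torch.zeros(n, h) for _ in range(L)]    # H_0^{(l)} = 0 for every layer
    outputs = []
    for X_t in X_seq:                            # loop over time steps
        inp = X_t                                # H_t^{(0)} = X_t
        for l in range(L):                       # loop over layers, bottom to top
            H[l] = torch.tanh(inp @ W_xh[l] + H[l] @ W_hh[l] + b_h[l])
            inp = H[l]                           # the layer above reads this state
        outputs.append(H[-1] @ W_hq + b_q)       # (9.3.2): output uses the top layer only
    return torch.stack(outputs), H

Note that only \(\mathbf{W}_{xh}^{(1)}\) has \(d\) rows; all higher layers consume \(h\)-dimensional hidden states.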

9.3.2. Concise Implementation

Fortunately, many of the logistical details required to implement multiple layers of an RNN are readily available in high-level APIs. To keep things simple, we only illustrate the implementation using such built-in functionalities. Let us take an LSTM model as an example. The code is very similar to the one we used previously in Section 9.2. In fact, the only difference is that we specify the number of layers explicitly rather than picking the default of a single layer. As usual, we begin by loading the dataset; the three code blocks below show the MXNet, PyTorch, and TensorFlow variants in turn.

# MXNet implementation
from mxnet import npx
from mxnet.gluon import rnn
from d2l import mxnet as d2l

npx.set_np()

batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)

# PyTorch implementation
import torch
from torch import nn
from d2l import torch as d2l

batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)

# TensorFlow implementation
import tensorflow as tf
from d2l import tensorflow as d2l

batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
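
As a quick sanity check (using the PyTorch train_iter defined above; the other frameworks behave analogously), each minibatch consists of token indices with one row per subsequence:

# Peek at one minibatch from the PyTorch `train_iter` defined above
X, Y = next(iter(train_iter))
print(X.shape, Y.shape)  # both are (batch_size, num_steps) tensors of token indices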

Architectural decisions, such as choosing hyperparameters, are very similar to those of Section 9.2. We pick the same number of inputs and outputs as we have distinct tokens, i.e., vocab_size. The number of hidden units is still 256. The only difference is that we now select a nontrivial number of hidden layers by specifying the value of num_layers.

# MXNet implementation
vocab_size, num_hiddens, num_layers = len(vocab), 256, 2
device = d2l.try_gpu()
lstm_layer = rnn.LSTM(num_hiddens, num_layers)
model = d2l.RNNModel(lstm_layer, len(vocab))

# PyTorch implementation
vocab_size, num_hiddens, num_layers = len(vocab), 256, 2
num_inputs = vocab_size
device = d2l.try_gpu()
lstm_layer = nn.LSTM(num_inputs, num_hiddens, num_layers)
model = d2l.RNNModel(lstm_layer, len(vocab))
model = model.to(device)

# TensorFlow implementation
vocab_size, num_hiddens, num_layers = len(vocab), 256, 2
num_inputs = vocab_size
device_name = d2l.try_gpu()._device_name
strategy = tf.distribute.OneDeviceStrategy(device_name)
rnn_cells = [tf.keras.layers.LSTMCell(num_hiddens) for _ in range(num_layers)]
stacked_lstm = tf.keras.layers.StackedRNNCells(rnn_cells)
lstm_layer = tf.keras.layers.RNN(stacked_lstm, time_major=True,
                                 return_sequences=True, return_state=True)
with strategy.scope():
    model = d2l.RNNModel(lstm_layer, len(vocab))
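
Before training, it can help to see how the stacked layers show up in the returned state. The following quick check assumes the PyTorch lstm_layer constructed with nn.LSTM above and feeds it a dummy minibatch; it is purely illustrative and not part of the training pipeline.

# Dummy input of shape (num_steps, batch_size, num_inputs); nn.LSTM is time-major by default
X = torch.randn(num_steps, batch_size, num_inputs)
Y, (H, C) = lstm_layer(X)
print(Y.shape)  # (num_steps, batch_size, num_hiddens): top-layer hidden states at every step
print(H.shape)  # (num_layers, batch_size, num_hiddens): final hidden state of each layer
print(C.shape)  # (num_layers, batch_size, num_hiddens): final memory cell of each layer

Only the top layer's states (Y) feed the output layer, matching (9.3.2).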

9.3.3. Training and Prediction

Since we now instantiate an LSTM with two hidden layers, this rather more complex architecture slows down training considerably.

# MXNet implementation
num_epochs, lr = 500, 2
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)

perplexity 1.1, 120927.4 tokens/sec on gpu(0)
time traveller with a slight accession ofcheerfulness really thi
travellerit would be remarkably convenient for the historia

# PyTorch implementation
num_epochs, lr = 500, 2
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)

perplexity 1.0, 210022.3 tokens/sec on cuda:0
time traveller for so it will be convenient to speak of himwas e
travelleryou can show black is white by argument said filby

# TensorFlow implementation
num_epochs, lr = 500, 2
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, strategy)

perplexity 1.0, 4151.9 tokens/sec on /GPU:0
time travelleryou can show black is white by argument said filby
travelleryou can show black is white by argument said filby
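
Behind the scenes, the training function clips gradients to keep optimization stable, as discussed in Section 8.5. For reference, a minimal norm-based clipping step written out by hand in PyTorch (the helper name clip_gradients is ours, not part of the d2l API) looks like this:

import torch

def clip_gradients(net, theta):
    """Rescale all gradients so that their global L2 norm is at most theta."""
    params = [p for p in net.parameters() if p.grad is not None]
    norm = torch.sqrt(sum(torch.sum(p.grad ** 2) for p in params))
    if norm > theta:
        for p in params:
            p.grad[:] *= theta / norm
    return norm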

9.3.4. Summary

  • In deep RNNs, the hidden state information is passed to the next time step of the current layer and the current time step of the next layer.

  • There exist many different flavors of deep RNNs, such as deep LSTMs, deep GRUs, or deep vanilla RNNs. Conveniently, these models are all available as part of the high-level APIs of deep learning frameworks.

  • Initialization of models requires care. Overall, deep RNNs require a considerable amount of work (such as tuning the learning rate and applying gradient clipping) to ensure proper convergence.

9.3.5. Exercises

  1. Try to implement a two-layer RNN from scratch using the single-layer implementation we discussed in Section 8.5.

  2. Replace the LSTM by a GRU and compare the accuracy and training speed.

  3. Increase the training data to include multiple books. How low can you go on the perplexity scale?

  4. Would you want to combine sources of different authors when modeling text? Why is this a good idea? What could go wrong?