8.6. Concise Implementation of Recurrent Neural Networks
While Section 8.5 was instructive for seeing how RNNs are implemented, implementing them from scratch is neither convenient nor fast. This section shows how to implement the same language model more efficiently using the functions provided by the high-level APIs of a deep learning framework. We begin, as before, by reading the time machine dataset.
from mxnet import np, npx
from mxnet.gluon import nn, rnn
from d2l import mxnet as d2l
npx.set_np()
batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
import torch
from torch import nn
from torch.nn import functional as F
from d2l import torch as d2l
batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
import tensorflow as tf
from d2l import tensorflow as d2l
batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
8.6.1. Defining the Model
High-level APIs provide implementations of recurrent neural networks. We construct the recurrent neural network layer rnn_layer with a single hidden layer and 256 hidden units. In fact, we have not even discussed yet what it means to have multiple layers; this will happen in Section 9.3. For now, suffice it to say that multiple layers simply amount to the output of one layer of RNN being used as the input for the next layer of RNN.
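As a preview (using PyTorch purely for illustration; the names and sizes below are illustrative assumptions, not code from this section), stacking is a one-argument change in the high-level API: passing num_layers=2 feeds the outputs of the first recurrent layer into the second.
import torch
from torch import nn

# Illustrative sketch only: a two-layer RNN via `num_layers=2`. The outputs
# of the first recurrent layer are used as the inputs of the second layer.
stacked_rnn = nn.RNN(input_size=28, hidden_size=256, num_layers=2)
X = torch.rand(35, 32, 28)  # (num_steps, batch_size, input_size)
Y, state = stacked_rnn(X)
Y.shape, state.shape  # (35, 32, 256) and (2, 32, 256): one state per layer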
num_hiddens = 256
rnn_layer = rnn.RNN(num_hiddens)
rnn_layer.initialize()
Initializing the hidden state is straightforward. We invoke the member function begin_state. This returns a list (state) that contains an initial hidden state for each example in the minibatch, whose shape is (number of hidden layers, batch size, number of hidden units). For some models to be introduced later (e.g., long short-term memory), such a list also contains other information.
state = rnn_layer.begin_state(batch_size=batch_size)
len(state), state[0].shape
(1, (1, 32, 256))
num_hiddens = 256
rnn_layer = nn.RNN(len(vocab), num_hiddens)
We use a tensor to initialize the hidden state, whose shape is (number of hidden layers, batch size, number of hidden units).
state = torch.zeros((1, batch_size, num_hiddens))
state.shape
torch.Size([1, 32, 256])
num_hiddens = 256
rnn_cell = tf.keras.layers.SimpleRNNCell(num_hiddens,
                                         kernel_initializer='glorot_uniform')
rnn_layer = tf.keras.layers.RNN(rnn_cell, time_major=True,
                                return_sequences=True, return_state=True)
state = rnn_cell.get_initial_state(batch_size=batch_size, dtype=tf.float32)
state.shape
TensorShape([32, 256])
With a hidden state and an input, we can compute the output with the updated hidden state. It should be emphasized that the “output” (Y) of rnn_layer does not involve computation of output layers: it refers to the hidden state at each time step, and these hidden states can be used as the input to the subsequent output layer.
Moreover, the updated hidden state (state_new) returned by rnn_layer refers to the hidden state at the last time step of the minibatch. It can be used to initialize the hidden state for the next minibatch within an epoch under sequential partitioning. For multiple hidden layers, the hidden state of each layer will be stored in this variable (state_new). For some models to be introduced later (e.g., long short-term memory), this variable also contains other information.
X = np.random.uniform(size=(num_steps, batch_size, len(vocab)))
Y, state_new = rnn_layer(X, state)
Y.shape, len(state_new), state_new[0].shape
((35, 32, 256), 1, (1, 32, 256))
X = torch.rand(size=(num_steps, batch_size, len(vocab)))
Y, state_new = rnn_layer(X, state)
Y.shape, state_new.shape
(torch.Size([35, 32, 256]), torch.Size([1, 32, 256]))
X = tf.random.uniform((num_steps, batch_size, len(vocab)))
Y, state_new = rnn_layer(X, state)
Y.shape, len(state_new), state_new[0].shape
(TensorShape([35, 32, 256]), 32, TensorShape([256]))
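To make the role of state_new concrete, here is a minimal sketch (in PyTorch, reusing rnn_layer, batch_size, num_steps, num_hiddens, and vocab from the PyTorch code above; the random inputs are stand-ins, not real minibatches) of carrying the returned state over to the next minibatch under sequential partitioning.
# Illustrative sketch: reuse the returned state across consecutive minibatches
state = torch.zeros((1, batch_size, num_hiddens))
for _ in range(3):  # stand-in for consecutive minibatches within one epoch
    X = torch.rand(num_steps, batch_size, len(vocab))
    Y, state = rnn_layer(X, state)
    # Detach so that backpropagation does not reach back into earlier minibatches
    state = state.detach()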
Similar to Section 8.5, we define an RNNModel class for a complete RNN model. Note that rnn_layer contains only the hidden recurrent layers; we need to create a separate output layer.
#@save
class RNNModel(nn.Block):
    """The RNN model."""
    def __init__(self, rnn_layer, vocab_size, **kwargs):
        super(RNNModel, self).__init__(**kwargs)
        self.rnn = rnn_layer
        self.vocab_size = vocab_size
        self.dense = nn.Dense(vocab_size)

    def forward(self, inputs, state):
        X = npx.one_hot(inputs.T, self.vocab_size)
        Y, state = self.rnn(X, state)
        # The fully connected layer will first change the shape of `Y` to
        # (`num_steps` * `batch_size`, `num_hiddens`). Its output shape is
        # (`num_steps` * `batch_size`, `vocab_size`)
        output = self.dense(Y.reshape(-1, Y.shape[-1]))
        return output, state

    def begin_state(self, *args, **kwargs):
        return self.rnn.begin_state(*args, **kwargs)
#@save
class RNNModel(nn.Module):
    """The RNN model."""
    def __init__(self, rnn_layer, vocab_size, **kwargs):
        super(RNNModel, self).__init__(**kwargs)
        self.rnn = rnn_layer
        self.vocab_size = vocab_size
        self.num_hiddens = self.rnn.hidden_size
        # If the RNN is bidirectional (to be introduced later),
        # `num_directions` should be 2, else it should be 1
        if not self.rnn.bidirectional:
            self.num_directions = 1
            self.linear = nn.Linear(self.num_hiddens, self.vocab_size)
        else:
            self.num_directions = 2
            self.linear = nn.Linear(self.num_hiddens * 2, self.vocab_size)

    def forward(self, inputs, state):
        X = F.one_hot(inputs.T.long(), self.vocab_size)
        X = X.to(torch.float32)
        Y, state = self.rnn(X, state)
        # The fully connected layer will first change the shape of `Y` to
        # (`num_steps` * `batch_size`, `num_hiddens`). Its output shape is
        # (`num_steps` * `batch_size`, `vocab_size`)
        output = self.linear(Y.reshape((-1, Y.shape[-1])))
        return output, state

    def begin_state(self, device, batch_size=1):
        if not isinstance(self.rnn, nn.LSTM):
            # `nn.GRU` takes a tensor as hidden state
            return torch.zeros((self.num_directions * self.rnn.num_layers,
                                batch_size, self.num_hiddens), device=device)
        else:
            # `nn.LSTM` takes a tuple of hidden states
            return (torch.zeros((self.num_directions * self.rnn.num_layers,
                                 batch_size, self.num_hiddens), device=device),
                    torch.zeros((self.num_directions * self.rnn.num_layers,
                                 batch_size, self.num_hiddens), device=device))
#@save
class RNNModel(tf.keras.layers.Layer):
    """The RNN model."""
    def __init__(self, rnn_layer, vocab_size, **kwargs):
        super(RNNModel, self).__init__(**kwargs)
        self.rnn = rnn_layer
        self.vocab_size = vocab_size
        self.dense = tf.keras.layers.Dense(vocab_size)

    def call(self, inputs, state):
        X = tf.one_hot(tf.transpose(inputs), self.vocab_size)
        # Later RNN layers like `tf.keras.layers.LSTMCell` return more than
        # two values
        Y, *state = self.rnn(X, state)
        output = self.dense(tf.reshape(Y, (-1, Y.shape[-1])))
        return output, state

    def begin_state(self, *args, **kwargs):
        return self.rnn.cell.get_initial_state(*args, **kwargs)
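As a quick sanity check (PyTorch, illustrative only; the token indices below are random stand-ins and reuse rnn_layer, vocab, batch_size, and num_steps from the PyTorch code above), we can pass a minibatch of token indices through the RNNModel just defined and confirm that the output shape is (num_steps * batch_size, vocab_size).
# Illustrative sanity check of the PyTorch `RNNModel` defined above
net = RNNModel(rnn_layer, vocab_size=len(vocab))
state = net.begin_state(device=torch.device('cpu'), batch_size=batch_size)
X = torch.randint(0, len(vocab), (batch_size, num_steps))  # random token indices
output, state = net(X, state)
output.shape  # (`num_steps` * `batch_size`, `vocab_size`)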
8.6.2. Training and Predicting
Before training the model, let us make a prediction with a model that has random weights.
device = d2l.try_gpu()
net = RNNModel(rnn_layer, len(vocab))
net.initialize(force_reinit=True, ctx=device)
d2l.predict_ch8('time traveller', 10, net, vocab, device)
'time travellervmoopwrrrr'
device = d2l.try_gpu()
net = RNNModel(rnn_layer, vocab_size=len(vocab))
net = net.to(device)
d2l.predict_ch8('time traveller', 10, net, vocab, device)
'time travellerllllllllll'
device_name = d2l.try_gpu()._device_name
strategy = tf.distribute.OneDeviceStrategy(device_name)
with strategy.scope():
    net = RNNModel(rnn_layer, vocab_size=len(vocab))

d2l.predict_ch8('time traveller', 10, net, vocab)
'time travellerdbmo<unk>afoam'
As is quite obvious, this model does not work at all. Next, we call train_ch8 with the same hyperparameters defined in Section 8.5 and train our model with high-level APIs.
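Before running it, it may help to see in rough outline what one training iteration does. The PyTorch sketch below is illustrative only, uses random stand-in data, reuses net and device from the PyTorch prediction code above, and is not the implementation of d2l.train_ch8; it simply spells out the pattern described in Section 8.5: detach the carried-over state, compute the cross-entropy loss on the flattened outputs, clip gradients, and take an SGD step.
# Illustrative sketch of a single training iteration (not `d2l.train_ch8`)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=1)
state = net.begin_state(device=device, batch_size=batch_size)
X = torch.randint(0, len(vocab), (batch_size, num_steps), device=device)  # stand-in inputs
Y = torch.randint(0, len(vocab), (batch_size, num_steps), device=device)  # stand-in targets
state = state.detach()  # do not backpropagate across minibatch boundaries
y_hat, state = net(X, state)
l = loss_fn(y_hat, Y.T.reshape(-1))  # targets flattened to match `y_hat`
optimizer.zero_grad()
l.backward()
torch.nn.utils.clip_grad_norm_(net.parameters(), 1)  # gradient clipping
optimizer.step()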
num_epochs, lr = 500, 1
d2l.train_ch8(net, train_iter, vocab, lr, num_epochs, device)
perplexity 1.2, 145303.3 tokens/sec on gpu(0)
time traveller for so it will be convenient to sperk of my insoa
traveller held in his hand was a glittering this began to r
num_epochs, lr = 500, 1
d2l.train_ch8(net, train_iter, vocab, lr, num_epochs, device)
perplexity 1.3, 279664.0 tokens/sec on cuda:0
time traveller but now you begin to seethe object of my investig
travellerey thind to move ato the engron th you mere heregh
num_epochs, lr = 500, 1
d2l.train_ch8(net, train_iter, vocab, lr, num_epochs, strategy)
perplexity 1.4, 15631.2 tokens/sec on /GPU:0
time travellerit wo th y s callont fowhow scane of thedimensions
travellerthere whac orecrmereee said the time travellerit
Compared with the last section, this model achieves comparable perplexity within a shorter period of time, because the code is more heavily optimized by the high-level APIs of the deep learning framework.
8.6.3. Summary
High-level APIs of the deep learning framework provide an implementation of the RNN layer.
The RNN layer of high-level APIs returns an output and an updated hidden state, where the output does not involve output layer computation.
Using high-level APIs leads to faster RNN training than using its implementation from scratch.
8.6.4. Exercises
Can you make the RNN model overfit using the high-level APIs?
What happens if you increase the number of hidden layers in the RNN model? Can you make the model work?
Implement the autoregressive model of Section 8.1 using an RNN.