Custom Layers
=============
One factor behind deep learning's success is the availability of a wide
range of layers that can be composed in creative ways to design
architectures suitable for a wide variety of tasks. For instance,
researchers have invented layers specifically for handling images and
text, for looping over sequential data, and for performing dynamic
programming. Sooner or later, you will encounter or invent a layer that
does not yet exist in the deep learning framework. In these cases, you
must build a custom
layer. In this section, we show you how.
Layers without Parameters
-------------------------
To start, we construct a custom layer that does not have any parameters
of its own. This should look familiar if you recall our introduction to
blocks in :numref:`sec_model_construction`. The following
``CenteredLayer`` class simply subtracts the mean from its input. To
build it, we need only inherit from the base layer class and implement
the forward propagation function.
MXNet:

.. code:: python

    from mxnet import np, npx
    from mxnet.gluon import nn

    npx.set_np()


    class CenteredLayer(nn.Block):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)

        def forward(self, X):
            return X - X.mean()
PyTorch:

.. code:: python

    import torch
    from torch import nn
    from torch.nn import functional as F


    class CenteredLayer(nn.Module):
        def __init__(self):
            super().__init__()

        def forward(self, X):
            return X - X.mean()
TensorFlow:

.. code:: python

    import tensorflow as tf


    class CenteredLayer(tf.keras.Model):
        def __init__(self):
            super().__init__()

        def call(self, inputs):
            return inputs - tf.reduce_mean(inputs)
Let us verify that our layer works as intended by feeding some data
through it.
MXNet:

.. code:: python

    layer = CenteredLayer()
    layer(np.array([1, 2, 3, 4, 5]))
.. parsed-literal::
    :class: output

    array([-2., -1.,  0.,  1.,  2.])
PyTorch:

.. code:: python

    layer = CenteredLayer()
    layer(torch.FloatTensor([1, 2, 3, 4, 5]))
.. parsed-literal::
    :class: output

    tensor([-2., -1.,  0.,  1.,  2.])
TensorFlow:

.. code:: python

    layer = CenteredLayer()
    layer(tf.constant([1, 2, 3, 4, 5]))
.. parsed-literal::
    :class: output

    <tf.Tensor: shape=(5,), dtype=int32, numpy=array([-2, -1,  0,  1,  2], dtype=int32)>
We can now incorporate our layer as a component in constructing more
complex models.
MXNet:

.. code:: python

    net = nn.Sequential()
    net.add(nn.Dense(128), CenteredLayer())
    net.initialize()
PyTorch:

.. code:: python

    net = nn.Sequential(nn.Linear(8, 128), CenteredLayer())
TensorFlow:

.. code:: python

    net = tf.keras.Sequential([tf.keras.layers.Dense(128), CenteredLayer()])
As an extra sanity check, we can send random data through the network
and verify that the mean is in fact 0. Because we are dealing with
floating point numbers, we may still see a very small nonzero value due
to finite precision.
MXNet:

.. code:: python

    Y = net(np.random.uniform(size=(4, 8)))
    Y.mean()
.. parsed-literal::
    :class: output

    array(3.783498e-10)
PyTorch:

.. code:: python

    Y = net(torch.rand(4, 8))
    Y.mean()
.. parsed-literal::
    :class: output

    tensor(9.3132e-10, grad_fn=<MeanBackward0>)
TensorFlow:

.. code:: python

    Y = net(tf.random.uniform((4, 8)))
    tf.reduce_mean(Y)
Layers with Parameters
----------------------
Now that we know how to define simple layers, let us move on to defining
layers with parameters that can be adjusted through training. We can use
built-in functions to create parameters, which provide some basic
housekeeping functionality. In particular, they govern access,
initialization, sharing, saving, and loading model parameters. This way,
among other benefits, we will not need to write custom serialization
routines for every custom layer.
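To see what this housekeeping buys us, here is a minimal sketch in
PyTorch (the ``ScaleLayer`` class below is a hypothetical example, not
one we will use again): because its parameter is created with
``nn.Parameter``, it appears in the model's ``state_dict`` and can be
saved and loaded with the standard mechanisms.

.. code:: python

    import torch
    from torch import nn


    class ScaleLayer(nn.Module):
        """A toy custom layer with one registered parameter."""
        def __init__(self, units):
            super().__init__()
            self.scale = nn.Parameter(torch.ones(units))

        def forward(self, X):
            return X * self.scale

    net = nn.Sequential(nn.Linear(4, 4), ScaleLayer(4))
    torch.save(net.state_dict(), 'net.params')  # serialization for free
    clone = nn.Sequential(nn.Linear(4, 4), ScaleLayer(4))
    clone.load_state_dict(torch.load('net.params'))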
Now let us implement our own version of the fully-connected layer.
Recall that this layer requires two parameters, one to represent the
weight and the other for the bias. In this implementation, we bake in
the ReLU activation as a default. This layer requires two input
arguments: ``in_units`` and ``units``, which denote the number of inputs
and outputs, respectively.
MXNet:

.. code:: python

    class MyDense(nn.Block):
        def __init__(self, units, in_units, **kwargs):
            super().__init__(**kwargs)
            self.weight = self.params.get('weight', shape=(in_units, units))
            self.bias = self.params.get('bias', shape=(units,))

        def forward(self, x):
            linear = np.dot(x, self.weight.data(ctx=x.ctx)) + self.bias.data(
                ctx=x.ctx)
            return npx.relu(linear)
Next, we instantiate the ``MyDense`` class and access its model
parameters.
.. code:: python

    dense = MyDense(units=3, in_units=5)
    dense.params
.. parsed-literal::
    :class: output

    mydense0_ (
      Parameter mydense0_weight (shape=(5, 3), dtype=float32)
      Parameter mydense0_bias (shape=(3,), dtype=float32)
    )
PyTorch:

.. code:: python

    class MyLinear(nn.Module):
        def __init__(self, in_units, units):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(in_units, units))
            self.bias = nn.Parameter(torch.randn(units,))

        def forward(self, X):
            # Use the parameters directly (not their .data attribute) so
            # that autograd tracks them and they can be trained.
            linear = torch.matmul(X, self.weight) + self.bias
            return F.relu(linear)
Next, we instantiate the ``MyLinear`` class and access its model
parameters.
.. code:: python

    linear = MyLinear(5, 3)
    linear.weight
.. parsed-literal::
    :class: output

    Parameter containing:
    tensor([[-0.0032,  0.9315,  0.0951],
            [-0.4086,  0.4896,  1.0422],
            [ 2.0881,  0.3823, -0.0764],
            [ 0.2432, -1.1701,  1.0910],
            [ 0.0835, -1.0049, -0.0300]], requires_grad=True)
TensorFlow:

.. code:: python

    class MyDense(tf.keras.Model):
        def __init__(self, units):
            super().__init__()
            self.units = units

        def build(self, X_shape):
            # Parameter shapes are inferred lazily from the first input.
            self.weight = self.add_weight(
                name='weight', shape=[X_shape[-1], self.units],
                initializer=tf.random_normal_initializer())
            self.bias = self.add_weight(
                name='bias', shape=[self.units],
                initializer=tf.zeros_initializer())

        def call(self, X):
            linear = tf.matmul(X, self.weight) + self.bias
            return tf.nn.relu(linear)
Next, we instantiate the ``MyDense`` class and access its model
parameters.
.. code:: python

    dense = MyDense(3)
    dense(tf.random.uniform((2, 5)))
    dense.get_weights()
.. parsed-literal::
    :class: output

    [array([[ 0.04395126, -0.06615613,  0.01170311],
            [ 0.05419813,  0.02635321, -0.04684253],
            [-0.07571497,  0.08179992, -0.00349963],
            [ 0.09277276,  0.05270632,  0.04224372],
            [ 0.04428378, -0.04567032, -0.00297161]], dtype=float32),
     array([0., 0., 0.], dtype=float32)]
We can directly carry out forward propagation calculations using custom
layers.
MXNet:

.. code:: python

    dense.initialize()
    dense(np.random.uniform(size=(2, 5)))
.. parsed-literal::
    :class: output

    array([[0.        , 0.01633355, 0.        ],
           [0.        , 0.01581812, 0.        ]])
PyTorch:

.. code:: python

    linear(torch.rand(2, 5))
.. parsed-literal::
    :class: output

    tensor([[2.6824, 0.0000, 0.0000],
            [2.0101, 0.1381, 0.4309]], grad_fn=<ReluBackward0>)
TensorFlow:

.. code:: python

    dense(tf.random.uniform((2, 5)))
We can also construct models using custom layers. Once defined, they can
be used just like the built-in fully-connected layer.
MXNet:

.. code:: python

    net = nn.Sequential()
    net.add(MyDense(8, in_units=64),
            MyDense(1, in_units=8))
    net.initialize()
    net(np.random.uniform(size=(2, 64)))
.. parsed-literal::
    :class: output

    array([[0.06508517],
           [0.0615553 ]])
PyTorch:

.. code:: python

    net = nn.Sequential(MyLinear(64, 8), MyLinear(8, 1))
    net(torch.rand(2, 64))
.. parsed-literal::
    :class: output

    tensor([[10.6553],
            [11.9528]], grad_fn=<ReluBackward0>)
TensorFlow:

.. code:: python

    net = tf.keras.models.Sequential([MyDense(8), MyDense(1)])
    net(tf.random.uniform((2, 64)))
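Because the weight and the bias above are registered as genuine
parameters, gradients flow through our custom layers just as they do
through built-in ones. A minimal sanity check in PyTorch, assuming the
``net`` built from ``MyLinear`` above:

.. code:: python

    # Backpropagate a simple scalar loss and confirm that the custom
    # layer's parameters received gradients.
    Y = net(torch.rand(2, 64))
    Y.sum().backward()
    net[0].weight.grad.shape  # torch.Size([64, 8])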
Summary
-------
- We can design custom layers via the basic layer class. This allows us
to define flexible new layers that behave differently from any
existing layers in the library.
- Once defined, custom layers can be invoked in arbitrary contexts and
architectures.
- Layers can have local parameters, which can be created through
built-in functions.
Exercises
---------
1. Design a layer that takes an input and computes a tensor reduction,
i.e., it returns :math:`y_k = \sum_{i, j} W_{ijk} x_i x_j`.
2. Design a layer that returns the leading half of the Fourier
coefficients of the data.
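As a starting point, here is one possible sketch in PyTorch (the class
names and shapes are our own illustrative choices, not part of the
book's code): ``torch.einsum`` expresses the reduction
:math:`y_k = \sum_{i, j} W_{ijk} x_i x_j` directly, and ``torch.fft.fft``
yields the Fourier coefficients.

.. code:: python

    import torch
    from torch import nn


    class TensorReduction(nn.Module):
        """Computes y_k = sum_{i,j} W_{ijk} x_i x_j for one input vector x."""
        def __init__(self, dim_in, dim_out):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(dim_in, dim_in, dim_out))

        def forward(self, x):
            # Contract indices i and j against two copies of x, keeping k.
            return torch.einsum('ijk,i,j->k', self.weight, x, x)


    class HalfFourier(nn.Module):
        """Returns the leading half of the Fourier coefficients of the input."""
        def forward(self, x):
            return torch.fft.fft(x)[..., :x.shape[-1] // 2]

    TensorReduction(4, 2)(torch.rand(4)).shape  # torch.Size([2])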