# 14.4. Pretraining word2vec

We go on to implement the skip-gram model defined in Section 14.1. Then we will pretrain word2vec using negative sampling on the PTB dataset. First of all, let us obtain the data iterator and the vocabulary for this dataset by calling the d2l.load_data_ptb function, which was described in Section 14.3.

import math
from mxnet import autograd, gluon, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l

npx.set_np()

batch_size, max_window_size, num_noise_words = 512, 5, 5
data_iter, vocab = d2l.load_data_ptb(batch_size, max_window_size,
                                     num_noise_words)

import math
import torch
from torch import nn
from d2l import torch as d2l

batch_size, max_window_size, num_noise_words = 512, 5, 5
data_iter, vocab = d2l.load_data_ptb(batch_size, max_window_size,
                                     num_noise_words)
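
To make the minibatch layout concrete before building the model, the quick check below (a sketch in PyTorch, assuming the data_iter returned by d2l.load_data_ptb above; the variable names are ours) prints the shape of each tensor in the first minibatch.

# Inspect the first minibatch: center word indices, concatenated context
# and noise word indices, the padding mask, and the binary labels
names = ['centers', 'contexts_negatives', 'masks', 'labels']
for batch in data_iter:
    for name, data in zip(names, batch):
        print(name, 'shape:', data.shape)
    break  # only look at the first minibatch

The centers tensor has shape (batch size, 1), while the other three share the shape (batch size, max_len) described in Section 14.3.5.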


## 14.4.1. The Skip-Gram Model

We implement the skip-gram model by using embedding layers and batch matrix multiplications. First, let us review how embedding layers work.

### 14.4.1.1. Embedding Layer

As described in Section 9.7, an embedding layer maps a token’s index to its feature vector. The weight of this layer is a matrix whose number of rows equals the dictionary size (input_dim) and whose number of columns equals the vector dimension for each token (output_dim). After a word embedding model is trained, this weight is what we need.

embed = nn.Embedding(input_dim=20, output_dim=4)
embed.initialize()
embed.weight

Parameter embedding0_weight (shape=(20, 4), dtype=float32)

embed = nn.Embedding(num_embeddings=20, embedding_dim=4)
print(f'Parameter embedding_weight ({embed.weight.shape}, '
      f'dtype={embed.weight.dtype})')

Parameter embedding_weight (torch.Size([20, 4]), dtype=torch.float32)


The input of an embedding layer is the index of a token (word). For any token index $$i$$, its vector representation can be obtained from the $$i^\mathrm{th}$$ row of the weight matrix in the embedding layer. Since the vector dimension (output_dim) was set to 4, the embedding layer returns vectors with shape (2, 3, 4) for a minibatch of token indices with shape (2, 3).

x = np.array([[1, 2, 3], [4, 5, 6]])
embed(x)

array([[[ 0.01438687,  0.05011239,  0.00628365,  0.04861524],
        [-0.01068833,  0.01729892,  0.02042518, -0.01618656],
        [-0.00873779, -0.02834515,  0.05484822, -0.06206018]],

       [[ 0.06491279, -0.03182812, -0.01631819, -0.00312688],
        [ 0.0408415 ,  0.04370362,  0.00404529, -0.0028032 ],
        [ 0.00952624, -0.01501013,  0.05958354,  0.04705103]]])

x = torch.tensor([[1, 2, 3], [4, 5, 6]])
embed(x)

tensor([[[ 0.2805,  1.3010,  0.7198, -0.5903],
         [-1.1867, -0.5929, -1.0576, -1.8864],
         [ 0.3474, -1.7009, -1.5279, -0.4445]],

        [[ 0.8890,  0.6734,  0.5100, -0.6860],
         [-1.6209,  1.0484, -0.1768,  1.8924],
         [ 0.7960, -0.1964,  0.2314,  1.6943]]], grad_fn=<EmbeddingBackward0>)
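
Since an embedding lookup is nothing more than row selection from the weight matrix, we can check this directly. The one-liner below is a minimal sketch in PyTorch, reusing the embed and x defined above.

# The vector returned for each token index equals the corresponding row
# of the weight matrix, selected via advanced indexing
print(torch.equal(embed(x), embed.weight[x]))  # True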


### 14.4.1.2. Defining the Forward Propagation

In the forward propagation, the input of the skip-gram model includes the center word indices center of shape (batch size, 1) and the concatenated context and noise word indices contexts_and_negatives of shape (batch size, max_len), where max_len is defined in Section 14.3.5. These two variables are first transformed from the token indices into vectors via the embedding layer, then their batch matrix multiplication (described in Section 10.2.4.1) returns an output of shape (batch size, 1, max_len). Each element in the output is the dot product of a center word vector and a context or noise word vector.

def skip_gram(center, contexts_and_negatives, embed_v, embed_u):
    v = embed_v(center)
    u = embed_u(contexts_and_negatives)
    pred = npx.batch_dot(v, u.swapaxes(1, 2))
    return pred

def skip_gram(center, contexts_and_negatives, embed_v, embed_u):
    v = embed_v(center)
    u = embed_u(contexts_and_negatives)
    pred = torch.bmm(v, u.permute(0, 2, 1))
    return pred


Let us print the output shape of this skip_gram function for some example inputs.

skip_gram(np.ones((2, 1)), np.ones((2, 4)), embed, embed).shape

(2, 1, 4)

skip_gram(torch.ones((2, 1), dtype=torch.long),
          torch.ones((2, 4), dtype=torch.long), embed, embed).shape

torch.Size([2, 1, 4])
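
To see that each element of this output is indeed a dot product of a center word vector and a context or noise word vector, we can verify one entry by hand. The following is a minimal PyTorch sketch; the center and contexts tensors are made-up inputs, and embed is reused from above.

# pred[0, 0, 0] should equal the dot product of the vector for center
# word 0 and the vector for context word 1
center = torch.tensor([[0], [1]])
contexts = torch.tensor([[1, 2, 3, 4], [5, 6, 7, 8]])
pred = skip_gram(center, contexts, embed, embed)
manual = torch.dot(embed.weight[0], embed.weight[1])
print(torch.allclose(pred[0, 0, 0], manual))  # True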


## 14.4.2. Training

Before training the skip-gram model with negative sampling, let us first define its loss function.

### 14.4.2.1. Binary Cross-Entropy Loss

According to the definition of the loss function for negative sampling in Section 14.2.1, we will use the binary cross-entropy loss.

loss = gluon.loss.SigmoidBCELoss()

class SigmoidBCELoss(nn.Module):
    # Binary cross-entropy loss with masking
    def __init__(self):
        super().__init__()

    def forward(self, inputs, target, mask=None):
        out = nn.functional.binary_cross_entropy_with_logits(
            inputs, target, weight=mask, reduction="none")
        return out.mean(dim=1)

loss = SigmoidBCELoss()


Recall our descriptions of the mask variable and the label variable in Section 14.3.5. The following calculates the binary cross-entropy loss for the given variables.

pred = np.array([[1.1, -2.2, 3.3, -4.4]] * 2)
label = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
mask = np.array([[1, 1, 1, 1], [1, 1, 0, 0]])
loss(pred, label, mask) * mask.shape[1] / mask.sum(axis=1)

array([0.93521017, 1.8462094 ])

pred = torch.tensor([[1.1, -2.2, 3.3, -4.4]] * 2)
label = torch.tensor([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
mask = torch.tensor([[1, 1, 1, 1], [1, 1, 0, 0]])
loss(pred, label, mask) * mask.shape[1] / mask.sum(axis=1)

tensor([0.9352, 1.8462])
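
In equation form (a restatement of what the code above computes, with $$\sigma$$ denoting the sigmoid function), the loss for the $$i^\mathrm{th}$$ example is

$$\frac{\sum_{j} m_{ij} \left(-y_{ij} \log \sigma(\hat{y}_{ij}) - (1 - y_{ij}) \log \left(1 - \sigma(\hat{y}_{ij})\right)\right)}{\sum_{j} m_{ij}},$$

where $$\hat{y}_{ij}$$, $$y_{ij}$$, and $$m_{ij}$$ are elements of pred, label, and mask, respectively. Positions where the mask is zero contribute nothing, and the result is averaged over the non-masked positions only.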


The following shows how the above results are calculated (in a less efficient way) using the sigmoid activation function in the binary cross-entropy loss. We can consider the two outputs as two normalized losses that are averaged over non-masked predictions.

def sigmd(x):
    return -math.log(1 / (1 + math.exp(-x)))

print(f'{(sigmd(1.1) + sigmd(2.2) + sigmd(-3.3) + sigmd(4.4)) / 4:.4f}')
print(f'{(sigmd(-1.1) + sigmd(-2.2)) / 2:.4f}')

0.9352
1.8462



### 14.4.2.2. Initializing Model Parameters

We define two embedding layers for all the words in the vocabulary when they are used as center words and context words, respectively. The word vector dimension embed_size is set to 100.

embed_size = 100
net = nn.Sequential()
net.add(nn.Embedding(input_dim=len(vocab), output_dim=embed_size),
        nn.Embedding(input_dim=len(vocab), output_dim=embed_size))

embed_size = 100
net = nn.Sequential(nn.Embedding(num_embeddings=len(vocab),
                                 embedding_dim=embed_size),
                    nn.Embedding(num_embeddings=len(vocab),
                                 embedding_dim=embed_size))
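
Here net[0] will serve as the embedding for center words and net[1] as the embedding for context and noise words, as the training loop below makes explicit. A quick parameter count (a PyTorch sketch, assuming the net just defined):

# Two vocabulary-sized embedding tables, each with embed_size columns
num_params = sum(p.numel() for p in net.parameters())
print(num_params == 2 * len(vocab) * embed_size)  # True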


### 14.4.2.3. Defining the Training Loop

The training loop is defined below. Because of the existence of padding, the calculation of the loss function is slightly different compared to the previous training functions.

def train(net, data_iter, lr, num_epochs, device=d2l.try_gpu()):
    net.initialize(ctx=device, force_reinit=True)
    trainer = gluon.Trainer(net.collect_params(), 'adam',
                            {'learning_rate': lr})
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs])
    # Sum of normalized losses, no. of normalized losses
    metric = d2l.Accumulator(2)
    for epoch in range(num_epochs):
        timer, num_batches = d2l.Timer(), len(data_iter)
        for i, batch in enumerate(data_iter):
            center, context_negative, mask, label = [
                data.as_in_ctx(device) for data in batch]
            with autograd.record():
                pred = skip_gram(center, context_negative, net[0], net[1])
                l = (loss(pred.reshape(label.shape), label, mask) *
                     mask.shape[1] / mask.sum(axis=1))
            l.backward()
            trainer.step(batch_size)
            metric.add(l.sum(), l.size)
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches,
                             (metric[0] / metric[1],))
    print(f'loss {metric[0] / metric[1]:.3f}, '
          f'{metric[1] / timer.stop():.1f} tokens/sec on {str(device)}')

def train(net, data_iter, lr, num_epochs, device=d2l.try_gpu()):
    def init_weights(m):
        if type(m) == nn.Embedding:
            nn.init.xavier_uniform_(m.weight)
    net.apply(init_weights)
    net = net.to(device)
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs])
    # Sum of normalized losses, no. of normalized losses
    metric = d2l.Accumulator(2)
    for epoch in range(num_epochs):
        timer, num_batches = d2l.Timer(), len(data_iter)
        for i, batch in enumerate(data_iter):
            optimizer.zero_grad()
            center, context_negative, mask, label = [
                data.to(device) for data in batch]

            pred = skip_gram(center, context_negative, net[0], net[1])
            l = (loss(pred.reshape(label.shape).float(), label.float(), mask)
                 / mask.sum(axis=1) * mask.shape[1])
            l.sum().backward()
            optimizer.step()
            metric.add(l.sum(), l.numel())
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches,
                             (metric[0] / metric[1],))
    print(f'loss {metric[0] / metric[1]:.3f}, '
          f'{metric[1] / timer.stop():.1f} tokens/sec on {str(device)}')


Now we can train a skip-gram model using negative sampling.

lr, num_epochs = 0.002, 5
train(net, data_iter, lr, num_epochs)

loss 0.407, 102071.8 tokens/sec on gpu(0)

lr, num_epochs = 0.002, 5
train(net, data_iter, lr, num_epochs)

loss 0.410, 359719.5 tokens/sec on cuda:0


## 14.4.3. Applying Word Embeddings

After training the word2vec model, we can use the cosine similarity of word vectors from the trained model to find words from the dictionary that are most semantically similar to an input word.

def get_similar_tokens(query_token, k, embed):
    W = embed.weight.data()
    x = W[vocab[query_token]]
    # Compute the cosine similarity. Add 1e-9 for numerical stability
    cos = np.dot(W, x) / np.sqrt(np.sum(W * W, axis=1) * np.sum(x * x) + 1e-9)
    topk = npx.topk(cos, k=k+1, ret_typ='indices').asnumpy().astype('int32')
    for i in topk[1:]:  # Remove the input words
        print(f'cosine sim={float(cos[i]):.3f}: {vocab.to_tokens(i)}')

get_similar_tokens('chip', 3, net[0])

cosine sim=0.689: intel
cosine sim=0.569: microprocessor
cosine sim=0.540: mips

def get_similar_tokens(query_token, k, embed):
    W = embed.weight.data
    x = W[vocab[query_token]]
    # Compute the cosine similarity. Add 1e-9 for numerical stability
    cos = torch.mv(W, x) / torch.sqrt(torch.sum(W * W, dim=1) *
                                      torch.sum(x * x) + 1e-9)
    topk = torch.topk(cos, k=k+1)[1].cpu().numpy().astype('int32')
    for i in topk[1:]:  # Remove the input words
        print(f'cosine sim={float(cos[i]):.3f}: {vocab.to_tokens(i)}')

get_similar_tokens('chip', 3, net[0])

cosine sim=0.790: microprocessor
cosine sim=0.684: desktop
cosine sim=0.683: disks
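
As a reminder of the formula behind get_similar_tokens, the cosine similarity of two vectors is their dot product divided by the product of their norms, so parallel vectors score 1. The toy PyTorch example below uses made-up vectors.

# cos(x, y) = dot(x, y) / (|x| * |y|); these two vectors are parallel
x, y = torch.tensor([1.0, 2.0]), torch.tensor([2.0, 4.0])
cos = torch.dot(x, y) / (torch.norm(x) * torch.norm(y))
print(f'{cos:.1f}')  # 1.0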


## 14.4.4. Summary

• We can train a skip-gram model with negative sampling using embedding layers and the binary cross-entropy loss.

• Applications of word embeddings include finding semantically similar words for a given word based on the cosine similarity of word vectors.

## 14.4.5. Exercises

1. Using the trained model, find semantically similar words for other input words. Can you improve the results by tuning hyperparameters?

2. When a training corpus is huge, we often sample context words and noise words for the center words in the current minibatch when updating model parameters. In other words, the same center word may have different context words or noise words in different training epochs. What are the benefits of this method? Try to implement this training method.
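
For the second exercise, one possible starting point (a sketch, not a reference solution) is to rebuild the data iterator at the start of every epoch; since Section 14.3 samples window sizes and noise words randomly, each call to d2l.load_data_ptb draws fresh context and noise words for the same center words.

# Resample context and noise words every epoch by reloading the data;
# the inner loop would reuse the logic from the train function above
for epoch in range(num_epochs):
    data_iter, vocab = d2l.load_data_ptb(batch_size, max_window_size,
                                         num_noise_words)
    # ... train for one epoch on the freshly sampled `data_iter` ...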