.. _sec_text_preprocessing:

Text Preprocessing
==================

We have reviewed and evaluated statistical tools and prediction challenges
for sequence data. Such data can take many forms. Specifically, as we will
focus on in many chapters of the book, text is one of the most popular
examples of sequence data. For example, an article can simply be viewed as a
sequence of words, or even a sequence of characters. To facilitate our future
experiments with sequence data, we dedicate this section to explaining common
preprocessing steps for text. Usually, these steps are:

1. Load text as strings into memory.
2. Split strings into tokens (e.g., words or characters).
3. Build a vocabulary to map the split tokens to numerical indices.
4. Convert text into sequences of numerical indices so they can be
   manipulated by models easily.
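Before turning to a real dataset, the following minimal sketch (ours, not
part of the ``d2l`` package) illustrates these four steps on a toy string
using nothing but plain Python:

.. code:: python

    # Illustrative only: the four preprocessing steps on a toy string.
    raw_text = 'the time machine by h g wells'     # 1. text loaded as a string
    toy_tokens = raw_text.split()                  # 2. split into word tokens
    toy_vocab = {token: idx                        # 3. map tokens to indices
                 for idx, token in enumerate(sorted(set(toy_tokens)))}
    toy_indices = [toy_vocab[token] for token in toy_tokens]  # 4. text -> indices
    print(toy_tokens)
    print(toy_indices)

The rest of this section builds more careful versions of these steps. First,
the imports: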
**mxnet**

.. raw:: latex

   \diilbookstyleinputcell

.. code:: python

    import collections
    import re
    from d2l import mxnet as d2l
**pytorch**

.. raw:: latex

   \diilbookstyleinputcell

.. code:: python

    import collections
    import re
    from d2l import torch as d2l
**tensorflow**

.. raw:: latex

   \diilbookstyleinputcell

.. code:: python

    import collections
    import re
    from d2l import tensorflow as d2l
Reading the Dataset
-------------------

To get started, we load text from H. G. Wells' *The Time Machine*. This is a
fairly small corpus of just over 30000 words, but for the purpose of what we
want to illustrate this is just fine. More realistic document collections
contain many billions of words. The following function reads the dataset into
a list of text lines, where each line is a string. For simplicity, here we
ignore punctuation and capitalization.
.. raw:: latex

   \diilbookstyleinputcell

.. code:: python

    #@save
    d2l.DATA_HUB['time_machine'] = (d2l.DATA_URL + 'timemachine.txt',
                                    '090b5e7e70c295757f55df93cb0a180b9691891a')

    def read_time_machine():  #@save
        """Load the time machine dataset into a list of text lines."""
        with open(d2l.download('time_machine'), 'r') as f:
            lines = f.readlines()
        return [re.sub('[^A-Za-z]+', ' ', line).strip().lower() for line in lines]

    lines = read_time_machine()
    print(f'# text lines: {len(lines)}')
    print(lines[0])
    print(lines[10])

.. raw:: latex

   \diilbookstyleoutputcell

.. parsed-literal::
    :class: output

    # text lines: 3221
    the time machine by h g wells
    twinkled and his usually pale face was flushed and animated the
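To see exactly what this normalization does, here is a small illustrative
check on a made-up line (not part of the dataset): non-alphabetic characters
are replaced by spaces, and the result is stripped and lowercased.

.. code:: python

    # Illustrative only: the same cleanup rule used in read_time_machine.
    import re

    sample = "The Time Traveller's eyes shone, and twinkled!"
    print(re.sub('[^A-Za-z]+', ' ', sample).strip().lower())
    # -> the time traveller s eyes shone and twinkled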
Tokenization
------------

The following ``tokenize`` function takes a list (``lines``) as the input,
where each element is a text sequence (e.g., a text line). Each text sequence
is split into a list of tokens. A *token* is the basic unit in text. In the
end, a list of token lists is returned, where each token is a string.
.. raw:: latex

   \diilbookstyleinputcell

.. code:: python

    def tokenize(lines, token='word'):  #@save
        """Split text lines into word or character tokens."""
        if token == 'word':
            return [line.split() for line in lines]
        elif token == 'char':
            return [list(line) for line in lines]
        else:
            print('ERROR: unknown token type: ' + token)

    tokens = tokenize(lines)
    for i in range(11):
        print(tokens[i])

.. raw:: latex

   \diilbookstyleoutputcell

.. parsed-literal::
    :class: output

    ['the', 'time', 'machine', 'by', 'h', 'g', 'wells']
    []
    []
    []
    []
    ['i']
    []
    []
    ['the', 'time', 'traveller', 'for', 'so', 'it', 'will', 'be', 'convenient', 'to', 'speak', 'of', 'him']
    ['was', 'expounding', 'a', 'recondite', 'matter', 'to', 'us', 'his', 'grey', 'eyes', 'shone', 'and']
    ['twinkled', 'and', 'his', 'usually', 'pale', 'face', 'was', 'flushed', 'and', 'animated', 'the']
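Character-level tokenization, which we will rely on later in this section,
works the same way. The following illustrative call (ours) shows the first
few character tokens of the first line; note that spaces are kept as tokens.

.. code:: python

    # Illustrative only: tokenize the same lines into characters.
    char_tokens = tokenize(lines, token='char')
    print(char_tokens[0][:12])
    # -> ['t', 'h', 'e', ' ', 't', 'i', 'm', 'e', ' ', 'm', 'a', 'c']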
Vocabulary
----------

The string type of the token is inconvenient for models, which take numerical
inputs. Now let us build a dictionary, often called a *vocabulary* as well, to
map string tokens into numerical indices starting from 0. To do so, we first
count the unique tokens in all the documents from the training set, namely a
*corpus*, and then assign a numerical index to each unique token according to
its frequency. Rarely appearing tokens are often removed to reduce the
complexity. Any token that does not exist in the corpus or has been removed is
mapped into a special unknown token ``<unk>``. We optionally add a list of
reserved tokens, such as ``<pad>`` for padding, ``<bos>`` to mark the beginning
of a sequence, and ``<eos>`` to mark the end of a sequence.

.. raw:: latex

   \diilbookstyleinputcell

.. code:: python

    class Vocab:  #@save
        """Vocabulary for text."""
        def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
            if tokens is None:
                tokens = []
            if reserved_tokens is None:
                reserved_tokens = []
            # Sort according to frequencies
            counter = count_corpus(tokens)
            self._token_freqs = sorted(counter.items(), key=lambda x: x[1],
                                       reverse=True)
            # The index for the unknown token is 0
            self.idx_to_token = ['<unk>'] + reserved_tokens
            self.token_to_idx = {token: idx
                                 for idx, token in enumerate(self.idx_to_token)}
            for token, freq in self._token_freqs:
                if freq < min_freq:
                    break
                if token not in self.token_to_idx:
                    self.idx_to_token.append(token)
                    self.token_to_idx[token] = len(self.idx_to_token) - 1

        def __len__(self):
            return len(self.idx_to_token)

        def __getitem__(self, tokens):
            if not isinstance(tokens, (list, tuple)):
                return self.token_to_idx.get(tokens, self.unk)
            return [self.__getitem__(token) for token in tokens]

        def to_tokens(self, indices):
            if not isinstance(indices, (list, tuple)):
                return self.idx_to_token[indices]
            return [self.idx_to_token[index] for index in indices]

        @property
        def unk(self):  # Index for the unknown token
            return 0

        @property
        def token_freqs(self):  # Token frequencies sorted in descending order
            return self._token_freqs

    def count_corpus(tokens):  #@save
        """Count token frequencies."""
        # Here `tokens` is a 1D list or 2D list
        if len(tokens) == 0 or isinstance(tokens[0], list):
            # Flatten a list of token lists into a list of tokens
            tokens = [token for line in tokens for token in line]
        return collections.Counter(tokens)

We construct a vocabulary using the time machine dataset as the corpus. Then
we print the first few frequent tokens with their indices.
.. raw:: latex

   \diilbookstyleinputcell

.. code:: python

    vocab = Vocab(tokens)
    print(list(vocab.token_to_idx.items())[:10])

.. raw:: latex

   \diilbookstyleoutputcell

.. parsed-literal::
    :class: output

    [('<unk>', 0), ('the', 1), ('i', 2), ('and', 3), ('of', 4), ('a', 5), ('to', 6), ('was', 7), ('in', 8), ('that', 9)]
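The handling of rare and unknown tokens is easiest to see on a toy corpus. In
the following illustrative example (the toy data is ours), tokens occurring
fewer than twice are dropped from the vocabulary and therefore map back to the
index of ``<unk>``:

.. code:: python

    # Illustrative only: min_freq filtering and unknown-token mapping.
    toy_tokens = [['hello', 'world', 'hello'], ['hello', 'there']]
    toy_vocab = Vocab(toy_tokens, min_freq=2, reserved_tokens=['<pad>'])
    print(list(toy_vocab.token_to_idx.items()))  # only 'hello' occurs >= 2 times
    print(toy_vocab['world'])                    # filtered out -> index of <unk>, i.e., 0
    print(toy_vocab.to_tokens([0, 1, 2]))        # ['<unk>', '<pad>', 'hello']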
Now we can convert each text line into a list of numerical indices.
.. raw:: latex

   \diilbookstyleinputcell

.. code:: python

    for i in [0, 10]:
        print('words:', tokens[i])
        print('indices:', vocab[tokens[i]])

.. raw:: latex

   \diilbookstyleoutputcell

.. parsed-literal::
    :class: output

    words: ['the', 'time', 'machine', 'by', 'h', 'g', 'wells']
    indices: [1, 19, 50, 40, 2183, 2184, 400]
    words: ['twinkled', 'and', 'his', 'usually', 'pale', 'face', 'was', 'flushed', 'and', 'animated', 'the']
    indices: [2186, 3, 25, 1044, 362, 113, 7, 1421, 3, 1045, 1]
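Since ``Vocab`` stores the mapping in both directions, we can also go back
from indices to tokens. This small illustrative round trip (ours) recovers the
original words:

.. code:: python

    # Illustrative only: indices -> tokens recovers the original words.
    print(vocab.to_tokens(vocab[tokens[0]]))
    # -> ['the', 'time', 'machine', 'by', 'h', 'g', 'wells']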
Putting All Things Together
---------------------------

Using the above functions, we package everything into the
``load_corpus_time_machine`` function, which returns ``corpus``, a list of
token indices, and ``vocab``, the vocabulary of the time machine corpus. The
modifications we made here are: (i) we tokenize text into characters, not
words, to simplify the training in later sections; (ii) ``corpus`` is a single
list, not a list of token lists, since each text line in the time machine
dataset is not necessarily a sentence or a paragraph.
.. raw:: latex

   \diilbookstyleinputcell

.. code:: python

    def load_corpus_time_machine(max_tokens=-1):  #@save
        """Return token indices and the vocabulary of the time machine dataset."""
        lines = read_time_machine()
        tokens = tokenize(lines, 'char')
        vocab = Vocab(tokens)
        # Since each text line in the time machine dataset is not necessarily a
        # sentence or a paragraph, flatten all the text lines into a single list
        corpus = [vocab[token] for line in tokens for token in line]
        if max_tokens > 0:
            corpus = corpus[:max_tokens]
        return corpus, vocab

    corpus, vocab = load_corpus_time_machine()
    len(corpus), len(vocab)

.. raw:: latex

   \diilbookstyleoutputcell

.. parsed-literal::
    :class: output

    (170580, 28)
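As a quick illustrative sanity check (ours, not part of the book's code),
converting the first few indices of ``corpus`` back through ``vocab``
reproduces the opening characters of the text:

.. code:: python

    # Illustrative only: decode the first indices back into characters.
    print(''.join(vocab.to_tokens(corpus[:21])))
    # -> the time machine by h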
Summary
-------

- Text is an important form of sequence data.
- To preprocess text, we usually split text into tokens, build a vocabulary
  to map token strings into numerical indices, and convert text data into
  token indices for models to manipulate.

Exercises
---------

1. Tokenization is a key preprocessing step. It varies for different
   languages. Try to find three other commonly used methods to tokenize text.
2. In the experiment of this section, tokenize text into words and vary the
   ``min_freq`` argument of the ``Vocab`` instance. How does this affect the
   vocabulary size?

`Discussions `__