4.4. Model Selection, Underfitting, and Overfitting
As machine learning scientists, our goal is to discover patterns. But how can we be sure that we have truly discovered a general pattern and not simply memorized our data? For example, imagine that we wanted to hunt for patterns among genetic markers linking patients to their dementia status, where the labels are drawn from the set \(\{\text{dementia}, \text{mild cognitive impairment}, \text{healthy}\}\). Because each person’s genes identify them uniquely (ignoring identical siblings), it is possible to memorize the entire dataset.
We do not want our model to say “That’s Bob! I remember him! He has dementia!” The reason why is simple. When we deploy the model in the future, we will encounter patients that the model has never seen before. Our predictions will only be useful if our model has truly discovered a general pattern.
To recapitulate more formally, our goal is to discover patterns that capture regularities in the underlying population from which our training set was drawn. If we are successful in this endeavor, then we could successfully assess risk even for individuals that we have never encountered before. This problem—how to discover patterns that generalize—is the fundamental problem of machine learning.
The danger is that when we train models, we access just a small sample of data. The largest public image datasets contain roughly one million images. More often, we must learn from only thousands or tens of thousands of data examples. In a large hospital system, we might access hundreds of thousands of medical records. When working with finite samples, we run the risk that we might discover apparent associations that turn out not to hold up when we collect more data.
The phenomenon of fitting our training data more closely than we fit the underlying distribution is called overfitting, and the techniques used to combat overfitting are called regularization. In the previous sections, you might have observed this effect while experimenting with the Fashion-MNIST dataset. If you altered the model structure or the hyperparameters during the experiment, you might have noticed that with enough neurons, layers, and training epochs, the model can eventually reach perfect accuracy on the training set, even as the accuracy on test data deteriorates.
4.4.1. Training Error and Generalization Error
In order to discuss this phenomenon more formally, we need to differentiate between training error and generalization error. The training error is the error of our model as calculated on the training dataset, while generalization error is the expectation of our model’s error were we to apply it to an infinite stream of additional data examples drawn from the same underlying data distribution as our original sample.
Problematically, we can never calculate the generalization error exactly. That is because the stream of infinite data is an imaginary object. In practice, we must estimate the generalization error by applying our model to an independent test set constituted of a random selection of data examples that were withheld from our training set.
The following three thought experiments will help illustrate this situation better. Consider a college student trying to prepare for a final exam. A diligent student will strive to practice well and test their abilities using exams from previous years. Nonetheless, doing well on past exams is no guarantee that they will excel when it matters. For instance, the student might try to prepare by rote learning the answers to the exam questions. This requires the student to memorize many things. They might even remember the answers for past exams perfectly. Another student might prepare by trying to understand the reasons for giving certain answers. In most cases, the latter student will do much better.
Likewise, consider a model that simply uses a lookup table to answer questions. If the set of allowable inputs is discrete and reasonably small, then perhaps after viewing many training examples, this approach would perform well. Still, this model has no ability to do better than random guessing when faced with examples that it has never seen before. In reality, the input spaces are far too large to memorize the answers corresponding to every conceivable input. For example, consider black-and-white \(28\times28\) images. If each pixel can take one among \(256\) grayscale values, then there are \(256^{784}\) possible images. That means that there are far more low-resolution grayscale thumbnail-sized images than there are atoms in the universe. Even if we could encounter such data, we could never afford to store the lookup table.
Last, consider the problem of trying to classify the outcomes of coin tosses (class 0: heads, class 1: tails) based on some contextual features that might be available. Suppose that the coin is fair. No matter what algorithm we come up with, the generalization error will always be \(\frac{1}{2}\). However, for most algorithms, we should expect our training error to be considerably lower, depending on the luck of the draw, even if we did not have any features! Consider the dataset {0, 1, 1, 1, 0, 1}. Our feature-less algorithm would have to fall back on always predicting the majority class, which appears from our limited sample to be 1. In this case, the model that always predicts class 1 will incur an error of \(\frac{1}{3}\), considerably better than our generalization error. As we increase the amount of data, the probability that the fraction of heads will deviate significantly from \(\frac{1}{2}\) diminishes, and our training error would come to match the generalization error.
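To make this concrete, here is a minimal NumPy sketch of the coin-toss thought experiment; the sample sizes and the random seed are arbitrary choices for illustration.

import numpy as np

np.random.seed(0)
for n in [6, 60, 600, 6000, 60000]:
    tosses = np.random.randint(0, 2, size=n)  # Fair coin: 0 = heads, 1 = tails
    majority = 1 if tosses.mean() >= 0.5 else 0  # Feature-less model: predict the majority class
    train_error = (tosses != majority).mean()  # Error on the very sample used to pick the majority
    print(f'n = {n:6d}, training error = {train_error:.3f}, generalization error = 0.500')

As the sample grows, the training error of this memorizing strategy creeps up toward the generalization error of one half.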
4.4.1.1. Statistical Learning Theory
Since generalization is the fundamental problem in machine learning, you might not be surprised to learn that many mathematicians and theorists have dedicated their lives to developing formal theories to describe this phenomenon. In their eponymous theorem, Glivenko and Cantelli derived the rate at which the training error converges to the generalization error. In a series of seminal papers, Vapnik and Chervonenkis extended this theory to more general classes of functions. This work laid the foundations of statistical learning theory.
In the standard supervised learning setting, which we have addressed up until now and will stick with throughout most of this book, we assume that both the training data and the test data are drawn independently from identical distributions. This is commonly called the i.i.d. assumption, which means that the process that samples our data has no memory. In other words, the second example drawn and the third drawn are no more correlated than the second and the two-millionth sample drawn.
Being a good machine learning scientist requires thinking critically, and already you should be poking holes in this assumption, coming up with common cases where the assumption fails. What if we train a mortality risk predictor on data collected from patients at UCSF Medical Center, and apply it on patients at Massachusetts General Hospital? These distributions are simply not identical. Moreover, draws might be correlated in time. What if we are classifying the topics of Tweets? The news cycle would create temporal dependencies in the topics being discussed, violating any assumptions of independence.
Sometimes we can get away with minor violations of the i.i.d. assumption and our models will continue to work remarkably well. After all, nearly every real-world application involves at least some minor violation of the i.i.d. assumption, and yet we have many useful tools for various applications such as face recognition, speech recognition, and language translation.
Other violations are sure to cause trouble. Imagine, for example, if we try to train a face recognition system by training it exclusively on university students and then want to deploy it as a tool for monitoring geriatrics in a nursing home population. This is unlikely to work well since college students tend to look considerably different from the elderly.
In subsequent chapters, we will discuss problems arising from violations of the i.i.d. assumption. For now, even taking the i.i.d. assumption for granted, understanding generalization is a formidable problem. Moreover, elucidating the precise theoretical foundations that might explain why deep neural networks generalize as well as they do continues to vex the greatest minds in learning theory.
When we train our models, we attempt to search for a function that fits the training data as well as possible. If the function is so flexible that it can catch on to spurious patterns just as easily as to true associations, then it might perform too well without producing a model that generalizes well to unseen data. This is precisely what we want to avoid or at least control. Many of the techniques in deep learning are heuristics and tricks aimed at guarding against overfitting.
4.4.1.2. Model Complexity
When we have simple models and abundant data, we expect the generalization error to resemble the training error. When we work with more complex models and fewer examples, we expect the training error to go down but the generalization gap to grow. What precisely constitutes model complexity is a complex matter. Many factors govern whether a model will generalize well. For example a model with more parameters might be considered more complex. A model whose parameters can take a wider range of values might be more complex. Often with neural networks, we think of a model that takes more training iterations as more complex, and one subject to early stopping (fewer training iterations) as less complex.
It can be difficult to compare the complexity among members of substantially different model classes (say, decision trees vs. neural networks). For now, a simple rule of thumb is quite useful: a model that can readily explain arbitrary facts is what statisticians view as complex, whereas one that has only a limited expressive power but still manages to explain the data well is probably closer to the truth. In philosophy, this is closely related to Popper’s criterion of falsifiability of a scientific theory: a theory is good if it fits data and if there are specific tests that can be used to disprove it. This is important since all statistical estimation is post hoc, i.e., we estimate after we observe the facts, hence vulnerable to the associated fallacy. For now, we will put the philosophy aside and stick to more tangible issues.
In this section, to give you some intuition, we will focus on a few factors that tend to influence the generalizability of a model class:
The number of tunable parameters. When the number of tunable parameters, sometimes called the degrees of freedom, is large, models tend to be more susceptible to overfitting.
The values taken by the parameters. When weights can take a wider range of values, models can be more susceptible to overfitting.
The number of training examples. It is trivially easy to overfit a dataset containing only one or two examples even if your model is simple. But overfitting a dataset with millions of examples requires an extremely flexible model.
4.4.2. Model Selection
In machine learning, we usually select our final model after evaluating several candidate models. This process is called model selection. Sometimes the models subject to comparison are fundamentally different in nature (say, decision trees vs. linear models). At other times, we are comparing members of the same class of models that have been trained with different hyperparameter settings.
With MLPs, for example, we may wish to compare models with different numbers of hidden layers, different numbers of hidden units, and various choices of the activation functions applied to each hidden layer. In order to determine the best among our candidate models, we will typically employ a validation dataset.
4.4.2.1. Validation Dataset
In principle we should not touch our test set until after we have chosen all our hyperparameters. Were we to use the test data in the model selection process, there is a risk that we might overfit the test data. Then we would be in serious trouble. If we overfit our training data, there is always the evaluation on test data to keep us honest. But if we overfit the test data, how would we ever know?
Thus, we should never rely on the test data for model selection. And yet we cannot rely solely on the training data for model selection either because we cannot estimate the generalization error on the very data that we use to train the model.
In practical applications, the picture gets muddier. While ideally we would only touch the test data once, to assess the very best model or to compare a small number of models to each other, real-world test data is seldom discarded after just one use. We can seldom afford a new test set for each round of experiments.
The common practice to address this problem is to split our data three ways, incorporating a validation dataset (or validation set) in addition to the training and test datasets. The result is a murky practice where the boundaries between validation and test data are worryingly ambiguous. Unless explicitly stated otherwise, in the experiments in this book we are really working with what should rightly be called training data and validation data, with no true test sets. Therefore, the accuracy reported in each experiment of the book is really the validation accuracy and not a true test set accuracy.
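As a rough sketch of this workflow (using plain NumPy and an arbitrary synthetic cubic as the ground truth, purely for illustration), the snippet below splits the data three ways, treats the polynomial degree as the hyperparameter under selection, and picks the degree with the lowest validation error while leaving the test portion untouched.

import numpy as np

# Synthetic data: an arbitrary cubic ground truth plus Gaussian noise
np.random.seed(0)
x = np.random.uniform(-1, 1, 300)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.5 * x**3 + 0.1 * np.random.randn(300)

# Three-way split: training, validation, and a test set that stays untouched
x_train, y_train = x[:180], y[:180]
x_valid, y_valid = x[180:240], y[180:240]
x_test, y_test = x[240:], y[240:]

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on the given data."""
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

# Model selection: fit each candidate degree on the training set and keep
# the degree with the lowest validation error
valid_errors = {}
for degree in [1, 2, 3, 5, 10, 15]:
    coeffs = np.polyfit(x_train, y_train, degree)
    valid_errors[degree] = mse(coeffs, x_valid, y_valid)
best_degree = min(valid_errors, key=valid_errors.get)
print('validation errors:', valid_errors)
print('selected degree:', best_degree)
# Only after selection would we report the chosen model's error on the test set

Degrees close to the true cubic typically achieve the lowest validation error; the test set plays no role until the final report.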
4.4.2.2. \(K\)-Fold Cross-Validation
When training data is scarce, we might not even be able to afford to hold out enough data to constitute a proper validation set. One popular solution to this problem is to employ \(K\)-fold cross-validation. Here, the original training data is split into \(K\) non-overlapping subsets. Then model training and validation are executed \(K\) times, each time training on \(K-1\) subsets and validating on a different subset (the one not used for training in that round). Finally, the training and validation errors are estimated by averaging over the results from the \(K\) experiments.
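A minimal sketch of the splitting logic might look as follows; the commented-out train_and_evaluate is a hypothetical placeholder for whatever routine trains a model on one index set and reports its error on the other.

import numpy as np

def k_fold_split(n, k):
    """Yield (train_idx, valid_idx) pairs for K non-overlapping folds."""
    idx = np.random.permutation(n)
    for fold in np.array_split(idx, k):
        train_idx = np.setdiff1d(idx, fold)  # All examples outside the current fold
        yield train_idx, fold

# Hypothetical usage: average the validation error over the K rounds
# errors = [train_and_evaluate(train_idx, valid_idx)
#           for train_idx, valid_idx in k_fold_split(n=100, k=5)]
# print('estimated validation error:', sum(errors) / len(errors))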
4.4.3. Underfitting or Overfitting?
When we compare the training and validation errors, we want to be mindful of two common situations. First, we want to watch out for cases when our training error and validation error are both substantial but there is little gap between them. If the model is unable to reduce the training error, that could mean that our model is too simple (i.e., insufficiently expressive) to capture the pattern that we are trying to model. Moreover, since the generalization gap between our training and validation errors is small, we have reason to believe that we could get away with a more complex model. This phenomenon is known as underfitting.
On the other hand, as we discussed above, we want to watch out for the cases when our training error is significantly lower than our validation error, indicating severe overfitting. Note that overfitting is not always a bad thing. With deep learning especially, it is well known that the best predictive models often perform far better on training data than on holdout data. Ultimately, we usually care more about the validation error than about the gap between the training and validation errors.
Whether we overfit or underfit can depend both on the complexity of our model and the size of the available training datasets, two topics that we discuss below.
4.4.3.1. Model Complexity
To illustrate some classical intuition about overfitting and model complexity, we give an example using polynomials. Given training data consisting of a single feature \(x\) and a corresponding real-valued label \(y\), we try to find the polynomial of degree \(d\)

\(\hat{y} = \sum_{i=0}^{d} x^i w_i\)

to estimate the labels \(y\). This is just a linear regression problem where our features are given by the powers of \(x\), the model's weights are given by \(w_i\), and the bias is given by \(w_0\) since \(x^0 = 1\) for all \(x\). Since this is just a linear regression problem, we can use the squared error as our loss function.
A higher-order polynomial function is more complex than a lower-order polynomial function, since the higher-order polynomial has more parameters and can represent a wider range of functions. Fixing the training dataset, higher-order polynomial functions should always achieve lower (at worst, equal) training error relative to lower-degree polynomials. In fact, whenever the data examples each have a distinct value of \(x\), a polynomial function with degree equal to the number of data examples can fit the training set perfectly. We visualize the relationship between polynomial degree and underfitting vs. overfitting in Fig. 4.4.1.
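As a quick numerical sketch of this claim (with an arbitrary data-generating function chosen purely for illustration), fitting polynomials of increasing degree to six noisy points drives the training error toward zero once the polynomial has at least as many coefficients as there are points.

import numpy as np

# Six training points with distinct values of x and noisy labels
np.random.seed(0)
x = np.linspace(-1, 1, 6)
y = np.sin(3 * x) + 0.1 * np.random.randn(6)

for degree in [1, 3, 5]:
    coeffs = np.polyfit(x, y, degree)  # Least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f'degree {degree}: training MSE = {train_mse:.2e}')
# The degree-5 fit has six coefficients for six points, so it interpolates the
# training data and its training error is (numerically) zero despite the noise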
4.4.3.2. Dataset Size
The other big consideration to bear in mind is the dataset size. Fixing our model, the fewer samples we have in the training dataset, the more likely (and more severely) we are to encounter overfitting. As we increase the amount of training data, the generalization error typically decreases. Moreover, in general, more data never hurt. For a fixed task and data distribution, there is typically a relationship between model complexity and dataset size. Given more data, we might profitably attempt to fit a more complex model. Absent sufficient data, simpler models may be more difficult to beat. For many tasks, deep learning only outperforms linear models when many thousands of training examples are available. In part, the current success of deep learning owes to the abundance of massive datasets arising from Internet companies, cheap storage, connected devices, and the broad digitization of the economy.
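A similar sketch illustrates the effect of dataset size: fixing a flexible degree-10 polynomial model and increasing the number of training examples (again with an arbitrary synthetic ground truth), the gap between the training error and the error on a large held-out set shrinks.

import numpy as np

np.random.seed(0)

def sample(n):
    """Draw n noisy examples from an arbitrary nonlinear ground truth."""
    x = np.random.uniform(-1, 1, n)
    return x, np.sin(3 * x) + 0.1 * np.random.randn(n)

x_test, y_test = sample(10000)  # Large held-out set to estimate generalization error
for n in [20, 50, 200, 1000]:
    x_train, y_train = sample(n)
    coeffs = np.polyfit(x_train, y_train, 10)  # Fixed, flexible model
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f'n = {n:5d}, train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}')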
4.4.4. Polynomial Regression
We can now explore these concepts interactively by fitting polynomials to data.
# MXNet
import math
from mxnet import gluon, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l

npx.set_np()

# PyTorch
import math
import numpy as np
import torch
from torch import nn
from d2l import torch as d2l

# TensorFlow
import math
import numpy as np
import tensorflow as tf
from d2l import tensorflow as d2l
4.4.4.1. Generating the Dataset
First we need data. Given \(x\), we will use the following cubic polynomial to generate the labels on training and test data:

\(y = 5 + 1.2 x - 3.4 \frac{x^2}{2!} + 5.6 \frac{x^3}{3!} + \epsilon \text{ where } \epsilon \sim \mathcal{N}(0, 0.1^2).\)

The noise term \(\epsilon\) obeys a normal distribution with a mean of 0 and a standard deviation of 0.1. For optimization, we typically want to avoid very large values of gradients or losses. This is why the features are rescaled from \(x^i\) to \(\frac{x^i}{i!}\). It allows us to avoid very large values for large exponents \(i\). We will synthesize 100 samples each for the training set and test set.
max_degree = 20 # Maximum degree of the polynomial
n_train, n_test = 100, 100 # Training and test dataset sizes
true_w = np.zeros(max_degree) # Allocate lots of empty space
true_w[0:4] = np.array([5, 1.2, -3.4, 5.6])
features = np.random.normal(size=(n_train + n_test, 1))
np.random.shuffle(features)
poly_features = np.power(features, np.arange(max_degree).reshape(1, -1))
for i in range(max_degree):
    poly_features[:, i] /= math.gamma(i + 1)  # `gamma(n)` = (n-1)!
# Shape of `labels`: (`n_train` + `n_test`,)
labels = np.dot(poly_features, true_w)
labels += np.random.normal(scale=0.1, size=labels.shape)
Again, monomials stored in poly_features are rescaled by the gamma function, where \(\Gamma(n)=(n-1)!\). Take a look at the first 2 samples from the generated dataset. The value 1 is technically a feature, namely the constant feature corresponding to the bias.
features[:2], poly_features[:2, :], labels[:2]
(array([[-0.03716067],
[-1.1468065 ]]),
array([[ 1.0000000e+00, -3.7160669e-02, 6.9045764e-04, -8.5526226e-06,
7.9455290e-08, -5.9052235e-10, 3.6573678e-12, -1.9415747e-14,
9.0187767e-17, -3.7238198e-19, 1.3837963e-21, -4.6747996e-24,
1.4476556e-26, -4.1381425e-29, 1.0984010e-31, -2.7211542e-34,
6.3199942e-37, -1.3815009e-39, 2.8516424e-42, -5.6051939e-45],
[ 1.0000000e+00, -1.1468065e+00, 6.5758252e-01, -2.5137332e-01,
7.2069131e-02, -1.6529869e-02, 3.1594271e-03, -5.1760738e-04,
7.4199430e-05, -9.4547095e-06, 1.0842723e-06, -1.1304095e-07,
1.0803007e-08, -9.5299690e-10, 7.8064499e-11, -5.9683248e-12,
4.2778208e-13, -2.8857840e-14, 1.8385754e-15, -1.1097317e-16]]),
array([ 5.1432443 , -0.06415121]))
# Convert from NumPy ndarrays to tensors
true_w, features, poly_features, labels = [torch.tensor(x, dtype=
torch.float32) for x in [true_w, features, poly_features, labels]]
features[:2], poly_features[:2, :], labels[:2]
(tensor([[-0.4880],
[-0.3300]]),
tensor([[ 1.0000e+00, -4.8800e-01, 1.1907e-01, -1.9369e-02, 2.3631e-03,
-2.3064e-04, 1.8759e-05, -1.3078e-06, 7.9774e-08, -4.3255e-09,
2.1109e-10, -9.3647e-12, 3.8083e-13, -1.4296e-14, 4.9832e-16,
-1.6212e-17, 4.9447e-19, -1.4194e-20, 3.8483e-22, -9.8840e-24],
[ 1.0000e+00, -3.2998e-01, 5.4442e-02, -5.9881e-03, 4.9398e-04,
-3.2600e-05, 1.7929e-06, -8.4516e-08, 3.4860e-09, -1.2781e-10,
4.2174e-12, -1.2651e-13, 3.4789e-15, -8.8303e-17, 2.0813e-18,
-4.5784e-20, 9.4423e-22, -1.8328e-23, 3.3598e-25, -5.8351e-27]]),
tensor([3.9316, 4.4468]))
# Convert from NumPy ndarrays to tensors
true_w, features, poly_features, labels = [tf.constant(x, dtype=
tf.float32) for x in [true_w, features, poly_features, labels]]
features[:2], poly_features[:2, :], labels[:2]
(<tf.Tensor: shape=(2, 1), dtype=float32, numpy=
array([[-0.4181114],
[-1.9106686]], dtype=float32)>,
<tf.Tensor: shape=(2, 20), dtype=float32, numpy=
array([[ 1.0000000e+00, -4.1811141e-01, 8.7408580e-02, -1.2182175e-02,
1.2733766e-03, -1.0648266e-04, 7.4202694e-06, -4.4321419e-07,
2.3164114e-08, -1.0761312e-09, 4.4994272e-11, -1.7102381e-12,
5.9589174e-14, -1.9165317e-15, 5.7237418e-17, -1.5954412e-18,
4.1692010e-20, -1.0254062e-21, 2.3818557e-23, -5.2414793e-25],
[ 1.0000000e+00, -1.9106686e+00, 1.8253274e+00, -1.1625319e+00,
5.5530334e-01, -2.1220013e-01, 6.7574024e-02, -1.8444510e-02,
4.4051684e-03, -9.3520188e-04, 1.7868608e-04, -3.1037263e-05,
4.9418272e-06, -7.2632264e-07, 9.9125849e-08, -1.2626444e-08,
1.5078094e-09, -1.6946612e-10, 1.7988534e-11, -1.8089541e-12]],
dtype=float32)>,
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([ 4.0812197, -10.005553 ], dtype=float32)>)
4.4.4.2. Training and Testing the Model
Let us first implement a function to evaluate the loss on a given dataset.
# MXNet
def evaluate_loss(net, data_iter, loss):  #@save
    """Evaluate the loss of a model on the given dataset."""
    metric = d2l.Accumulator(2)  # Sum of losses, no. of examples
    for X, y in data_iter:
        l = loss(net(X), y)
        metric.add(l.sum(), d2l.size(l))
    return metric[0] / metric[1]
# PyTorch
def evaluate_loss(net, data_iter, loss):  #@save
    """Evaluate the loss of a model on the given dataset."""
    metric = d2l.Accumulator(2)  # Sum of losses, no. of examples
    for X, y in data_iter:
        out = net(X)
        y = y.reshape(out.shape)
        l = loss(out, y)
        metric.add(l.sum(), l.numel())
    return metric[0] / metric[1]
# TensorFlow
def evaluate_loss(net, data_iter, loss):  #@save
    """Evaluate the loss of a model on the given dataset."""
    metric = d2l.Accumulator(2)  # Sum of losses, no. of examples
    for X, y in data_iter:
        l = loss(net(X), y)
        metric.add(tf.reduce_sum(l), d2l.size(l))
    return metric[0] / metric[1]
Now define the training function.
# MXNet
def train(train_features, test_features, train_labels, test_labels,
          num_epochs=400):
    loss = gluon.loss.L2Loss()
    net = nn.Sequential()
    # Switch off the bias since we already catered for it in the polynomial
    # features
    net.add(nn.Dense(1, use_bias=False))
    net.initialize()
    batch_size = min(10, train_labels.shape[0])
    train_iter = d2l.load_array((train_features, train_labels), batch_size)
    test_iter = d2l.load_array((test_features, test_labels), batch_size,
                               is_train=False)
    trainer = gluon.Trainer(net.collect_params(), 'sgd',
                            {'learning_rate': 0.01})
    animator = d2l.Animator(xlabel='epoch', ylabel='loss', yscale='log',
                            xlim=[1, num_epochs], ylim=[1e-3, 1e2],
                            legend=['train', 'test'])
    for epoch in range(num_epochs):
        d2l.train_epoch_ch3(net, train_iter, loss, trainer)
        if epoch == 0 or (epoch + 1) % 20 == 0:
            animator.add(epoch + 1, (evaluate_loss(net, train_iter, loss),
                                     evaluate_loss(net, test_iter, loss)))
    print('weight:', net[0].weight.data().asnumpy())
# PyTorch
def train(train_features, test_features, train_labels, test_labels,
          num_epochs=400):
    loss = nn.MSELoss(reduction='none')
    input_shape = train_features.shape[-1]
    # Switch off the bias since we already catered for it in the polynomial
    # features
    net = nn.Sequential(nn.Linear(input_shape, 1, bias=False))
    batch_size = min(10, train_labels.shape[0])
    train_iter = d2l.load_array((train_features, train_labels.reshape(-1, 1)),
                                batch_size)
    test_iter = d2l.load_array((test_features, test_labels.reshape(-1, 1)),
                               batch_size, is_train=False)
    trainer = torch.optim.SGD(net.parameters(), lr=0.01)
    animator = d2l.Animator(xlabel='epoch', ylabel='loss', yscale='log',
                            xlim=[1, num_epochs], ylim=[1e-3, 1e2],
                            legend=['train', 'test'])
    for epoch in range(num_epochs):
        d2l.train_epoch_ch3(net, train_iter, loss, trainer)
        if epoch == 0 or (epoch + 1) % 20 == 0:
            animator.add(epoch + 1, (evaluate_loss(net, train_iter, loss),
                                     evaluate_loss(net, test_iter, loss)))
    print('weight:', net[0].weight.data.numpy())
# TensorFlow
def train(train_features, test_features, train_labels, test_labels,
          num_epochs=400):
    loss = tf.losses.MeanSquaredError()
    input_shape = train_features.shape[-1]
    # Switch off the bias since we already catered for it in the polynomial
    # features
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.Dense(1, use_bias=False))
    batch_size = min(10, train_labels.shape[0])
    train_iter = d2l.load_array((train_features, train_labels), batch_size)
    test_iter = d2l.load_array((test_features, test_labels), batch_size,
                               is_train=False)
    trainer = tf.keras.optimizers.SGD(learning_rate=.01)
    animator = d2l.Animator(xlabel='epoch', ylabel='loss', yscale='log',
                            xlim=[1, num_epochs], ylim=[1e-3, 1e2],
                            legend=['train', 'test'])
    for epoch in range(num_epochs):
        d2l.train_epoch_ch3(net, train_iter, loss, trainer)
        if epoch == 0 or (epoch + 1) % 20 == 0:
            animator.add(epoch + 1, (evaluate_loss(net, train_iter, loss),
                                     evaluate_loss(net, test_iter, loss)))
    print('weight:', net.get_weights()[0].T)
4.4.4.3. Third-Order Polynomial Function Fitting (Normal)
We will begin by using a third-order polynomial function, which has the same order as the data generation function. The results show that this model's training and test losses can both be effectively reduced. The learned model parameters are also close to the true values \(w = [5, 1.2, -3.4, 5.6]\).
# Pick the first four dimensions, i.e., 1, x, x^2/2!, x^3/3! from the
# polynomial features
train(poly_features[:n_train, :4], poly_features[n_train:, :4],
labels[:n_train], labels[n_train:])
weight: [[ 5.0192943 1.2223521 -3.4234805 5.571871 ]]
# Pick the first four dimensions, i.e., 1, x, x^2/2!, x^3/3! from the
# polynomial features
train(poly_features[:n_train, :4], poly_features[n_train:, :4],
labels[:n_train], labels[n_train:])
weight: [[ 5.007622 1.2259083 -3.4264183 5.5359674]]
# Pick the first four dimensions, i.e., 1, x, x^2/2!, x^3/3! from the
# polynomial features
train(poly_features[:n_train, :4], poly_features[n_train:, :4],
labels[:n_train], labels[n_train:])
weight: [[ 4.996336 1.2113124 -3.374242 5.577011 ]]
4.4.4.4. Linear Function Fitting (Underfitting)
Let us take another look at linear function fitting. After the decline in early epochs, it becomes difficult to further decrease this model's training loss. After the last epoch has been completed, the training loss is still high. When used to fit nonlinear patterns (like the third-order polynomial function here), linear models are liable to underfit.
# Pick the first two dimensions, i.e., 1, x, from the polynomial features
train(poly_features[:n_train, :2], poly_features[n_train:, :2],
labels[:n_train], labels[n_train:])
weight: [[2.7020211 4.2250185]]
# Pick the first two dimensions, i.e., 1, x, from the polynomial features
train(poly_features[:n_train, :2], poly_features[n_train:, :2],
labels[:n_train], labels[n_train:])
weight: [[3.1913412 5.462743 ]]
# Pick the first two dimensions, i.e., 1, x, from the polynomial features
train(poly_features[:n_train, :2], poly_features[n_train:, :2],
labels[:n_train], labels[n_train:])
weight: [[3.3811035 3.3968449]]
4.4.4.5. Higher-Order Polynomial Function Fitting (Overfitting)
Now let us try to train the model using a polynomial of too high a degree. Here, there are insufficient data to learn that the higher-degree coefficients should have values close to zero. As a result, our overly complex model is so sensitive that it is easily influenced by noise in the training data. Though the training loss can be effectively reduced, the test loss is still much higher. This shows that the complex model overfits the data.
# Pick all the dimensions from the polynomial features
train(poly_features[:n_train, :], poly_features[n_train:, :],
labels[:n_train], labels[n_train:], num_epochs=1500)
weight: [[ 4.9921722 1.3058262 -3.3530521 5.1164474 -0.11139771 1.3029987
0.12676814 0.1665019 0.05129562 -0.02275684 0.00806194 -0.05167864
-0.02426299 -0.01502208 -0.04941354 0.06389863 -0.04761845 -0.04380166
-0.05188227 0.05655775]]
# Pick all the dimensions from the polynomial features
train(poly_features[:n_train, :], poly_features[n_train:, :],
labels[:n_train], labels[n_train:], num_epochs=1500)
weight: [[ 5.0143905 1.2483506 -3.4688613 5.3863044 0.128645 0.57388455
0.25032377 0.03927511 -0.16668083 0.19558907 0.0273363 -0.08089693
0.09546543 0.21860437 -0.02041928 0.10810804 -0.02953252 0.20599808
0.16862932 0.09344002]]
# Pick all the dimensions from the polynomial features
train(poly_features[:n_train, :], poly_features[n_train:, :],
labels[:n_train], labels[n_train:], num_epochs=1500)
weight: [[ 4.9543247 1.27297 -3.1677556 5.126848 -0.46584105 1.5038651
-0.23602088 0.1550852 0.23561436 -0.34404495 -0.3402495 -0.08933819
0.29152173 -0.47826046 0.01498975 0.19857779 -0.44675142 -0.24792081
-0.25173482 0.48487788]]
In the subsequent sections, we will continue to discuss overfitting problems and methods for dealing with them, such as weight decay and dropout.
4.4.5. Summary
Since the generalization error cannot be estimated based on the training error, simply minimizing the training error will not necessarily mean a reduction in the generalization error. Machine learning models need to safeguard against overfitting so as to minimize the generalization error.
A validation set can be used for model selection, provided that it is not used too liberally.
Underfitting means that a model is not able to reduce the training error. When training error is much lower than validation error, there is overfitting.
We should choose an appropriately complex model and avoid using insufficient training samples.
4.4.6. Exercises
Can you solve the polynomial regression problem exactly? Hint: use linear algebra.
Consider model selection for polynomials:
Plot the training loss vs. model complexity (degree of the polynomial). What do you observe? What degree of polynomial do you need to reduce the training loss to 0?
Plot the test loss in this case.
Generate the same plot as a function of the amount of data.
What happens if you drop the normalization (\(1/i!\)) of the polynomial features \(x^i\)? Can you fix this in some other way?
Can you ever expect to see zero generalization error?