Overfitting and underfitting

#Artificial intelligence
Feb 9, 2022

What is overfitting in machine learning models? And what is underfitting?

Both overfitting and underfitting are typical challenges encountered when training machine learning models. They describe how well a model generalizes the relationships it has learned, and both lead to high error rates on unseen data. As a rule, overfitting occurs more frequently than underfitting.

One speaks of overfitting when a model fits the training data too closely and is therefore unable to make good predictions on unseen test data. The model has absorbed too much "noise" during training and consequently cannot generalize the learned relationships. Overfitting leads to low bias and high variance.
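This effect can be sketched in a few lines. The example below is illustrative only and not from the original article: it assumes scikit-learn and a toy sine-shaped dataset, and fits a deliberately over-flexible degree-15 polynomial to a handful of noisy points.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# A small, noisy training set drawn from a simple sine-shaped relationship
X_train = np.sort(rng.uniform(0, 1, 15)).reshape(-1, 1)
y_train = np.sin(2 * np.pi * X_train).ravel() + rng.normal(0, 0.2, 15)

# A noise-free test set from the same underlying relationship
X_test = np.linspace(0, 1, 100).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()

# A degree-15 polynomial has enough capacity to chase the noise in 15 points
overfit_model = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
overfit_model.fit(X_train, y_train)

print("train MSE:", mean_squared_error(y_train, overfit_model.predict(X_train)))
print("test MSE: ", mean_squared_error(y_test, overfit_model.predict(X_test)))
# Typical outcome: training error close to zero, test error far larger
```

The large gap between training and test error is the practical signature of overfitting.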

In contrast to overfitting, the high error rates in underfitting arise because the model cannot adequately capture the relationship between the inputs and outputs of the training data. The model is too simple to pick up the relevant patterns in the data, let alone the noise, and therefore generalizes too much; hence the "simple" straight line in the figure showing the model's fit to the training data. Underfitting results in high bias and low variance.
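The opposite behaviour can be shown with the same kind of toy data (again an illustrative sketch assuming scikit-learn, not part of the original article): a plain straight-line model is too simple for the sine-shaped relationship, so both training and test error stay high.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Same kind of toy data: a sine-shaped relationship plus noise
X_train = np.sort(rng.uniform(0, 1, 15)).reshape(-1, 1)
y_train = np.sin(2 * np.pi * X_train).ravel() + rng.normal(0, 0.2, 15)
X_test = np.linspace(0, 1, 100).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()

# A plain straight line lacks the capacity to capture the sine shape
underfit_model = LinearRegression().fit(X_train, y_train)

print("train MSE:", mean_squared_error(y_train, underfit_model.predict(X_train)))
print("test MSE: ", mean_squared_error(y_test, underfit_model.predict(X_test)))
# Typical outcome: training and test error are both high -- the model has
# missed the relevant structure (high bias, low variance)
```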

As a rule, more complex models tend to overfit (the red serpentine curve in the figure), while simpler models tend to underfit (the red straight line in the figure). Both overfitting and underfitting can be counteracted in various ways: for example, the complexity of the model can be adjusted, or the amount of training data can be increased.
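Both remedies can be sketched with the same toy setup (again assuming scikit-learn and the illustrative sine data, which are not from the original article): lowering the polynomial degree reduces the model's complexity, and enlarging the training set makes it harder for the flexible model to chase the noise.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def make_data(n):
    """Toy sine-shaped relationship plus noise, with n training points."""
    X = np.sort(rng.uniform(0, 1, n)).reshape(-1, 1)
    y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, n)
    return X, y

# Noise-free test set from the same underlying relationship
X_test = np.linspace(0, 1, 200).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()

def test_mse(degree, n_train):
    X, y = make_data(n_train)
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X, y)
    return mean_squared_error(y_test, model.predict(X_test))

print("degree 15, 15 points :", test_mse(15, 15))    # overfits
print("degree 3,  15 points :", test_mse(3, 15))     # lower model complexity
print("degree 15, 500 points:", test_mse(15, 500))   # more training data
# Typically, both remedies bring the test error down compared with the
# degree-15 model trained on only 15 points
```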

As with the bias-variance tradeoff, the key is to find a balance between overfitting and underfitting in order to develop a well-performing model.

Source (translated): Towards Data Science and Medium

