In machine learning (ML), a learning curve (or training curve) is a graphical representation that shows how a model's performance on a training set (and usually a validation set) changes with the number of training iterations (epochs) or the amount of training data. Typically, the number of training epochs or training set size is plotted on the x-axis, and the value of the loss function (and possibly some other metric such as the cross-validation score) on the y-axis.

Synonyms include error curve, experience curve, improvement curve and generalization curve.

More abstractly, learning curves plot predictive performance as a function of learning effort, where "learning effort" usually means the number of training samples and "predictive performance" means accuracy on test samples.

Learning curves have many useful purposes in ML, including:

  • choosing model parameters during design,
  • adjusting optimization to improve convergence,
  • and diagnosing problems such as overfitting (or underfitting).

Learning curves are also tools for determining how much a model benefits from adding more training data, and whether the model suffers more from variance error or from bias error. If both the validation score and the training score converge to a plateau at a similar value, the model will no longer significantly benefit from more training data.
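As a concrete illustration, the sketch below uses scikit-learn's learning_curve utility to produce such a plot; the digits dataset, the SVC estimator, and the cross-validation settings are assumptions chosen purely for the example:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_digits
    from sklearn.model_selection import learning_curve
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    # Fit the estimator on increasing fractions of the data, scoring each
    # fit on its own training subset and on held-out folds (5-fold CV).
    train_sizes, train_scores, val_scores = learning_curve(
        SVC(kernel="rbf", gamma=0.001), X, y,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
    )

    plt.plot(train_sizes, train_scores.mean(axis=1), "o-", label="training score")
    plt.plot(train_sizes, val_scores.mean(axis=1), "o-", label="validation score")
    plt.xlabel("training set size")
    plt.ylabel("accuracy")
    plt.legend()
    plt.show()

If the two curves have converged to similar values, more data is unlikely to help; a persistent gap between them points to variance error (overfitting), while convergence at a poor score points to bias error (underfitting).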

Formal definition

When creating a function to approximate the distribution of some data, it is necessary to define a loss function $L(f_\theta(X), Y)$ to measure how good the model output is (e.g., accuracy for classification tasks or mean squared error for regression). We then define an optimization process that finds the model parameters $\theta$ minimizing $L(f_\theta(X), Y)$; these optimal parameters are denoted $\theta^*$.
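Written out, the trained parameters are simply the minimizer of the loss over the parameter space:

    $\theta^* = \arg\min_{\theta} L(f_\theta(X), Y)$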

Training curve for amount of data

If the training data is

$\{x_1, x_2, \dots, x_n\}, \{y_1, y_2, \dots, y_n\}$

and the validation data is

$\{x_1', x_2', \dots, x_m'\}, \{y_1', y_2', \dots, y_m'\}$,

a learning curve is the plot of the two curves

  1. $i \mapsto L(f_{\theta^*(X_i, Y_i)}(X_i), Y_i)$
  2. $i \mapsto L(f_{\theta^*(X_i, Y_i)}(X'), Y')$

where $X_i = \{x_1, x_2, \dots, x_i\}$ and $Y_i = \{y_1, y_2, \dots, y_i\}$.
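A minimal sketch of these two curves, assuming a least-squares regression model and mean squared error as the loss (the synthetic data and the choice of model are illustrative, not part of the definition):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)

    # Synthetic training data {x_1..x_n}, {y_1..y_n} and validation data.
    n, m = 200, 50
    X = rng.uniform(-1, 1, size=(n, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.3, size=n)
    X_val = rng.uniform(-1, 1, size=(m, 1))
    y_val = 2.0 * X_val[:, 0] + rng.normal(scale=0.3, size=m)

    train_loss, val_loss = [], []
    sizes = range(5, n + 1, 5)
    for i in sizes:
        # theta*(X_i, Y_i): fit on the first i training samples.
        model = LinearRegression().fit(X[:i], y[:i])
        # Curve 1: loss on the i samples the model was trained on.
        train_loss.append(mean_squared_error(y[:i], model.predict(X[:i])))
        # Curve 2: loss on the full validation set (X', Y').
        val_loss.append(mean_squared_error(y_val, model.predict(X_val)))

Plotting train_loss and val_loss against sizes gives curves 1 and 2, respectively.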

Training curve for number of iterations

Many optimization algorithms are iterative, repeating the same step (such as backpropagation) until the process converges to an optimal value. Gradient descent is one such algorithm. If $\theta_i^*$ is the approximation of the optimal $\theta$ after $i$ steps, a learning curve is the plot of

  1. $i \mapsto L(f_{\theta_i^*(X, Y)}(X), Y)$
  2. $i \mapsto L(f_{\theta_i^*(X, Y)}(X'), Y')$
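A sketch of these iteration-indexed curves, using plain gradient descent on a least-squares objective (the linear model, step size, and synthetic data are assumptions made for the example):

    import numpy as np

    rng = np.random.default_rng(1)

    # Training data (X, Y) and validation data (X', Y') for a linear model.
    true_theta = np.array([1.0, -2.0, 0.5])
    X = rng.normal(size=(100, 3))
    y = X @ true_theta + rng.normal(scale=0.1, size=100)
    X_val = rng.normal(size=(30, 3))
    y_val = X_val @ true_theta + rng.normal(scale=0.1, size=30)

    def loss(theta, A, b):
        # Mean squared error L(f_theta(A), b) with f_theta(A) = A @ theta.
        return np.mean((A @ theta - b) ** 2)

    theta = np.zeros(3)  # initial parameter guess
    lr = 0.05            # gradient-descent step size
    train_curve, val_curve = [], []
    for i in range(200):
        grad = 2 * X.T @ (X @ theta - y) / len(y)   # gradient of training loss
        theta -= lr * grad                          # theta_i*: one descent step
        train_curve.append(loss(theta, X, y))       # curve 1: training loss at step i
        val_curve.append(loss(theta, X_val, y_val)) # curve 2: validation loss at step i

Plotting train_curve and val_curve against the step index traces how optimization converges on the training data while revealing when the validation loss stops improving.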

See also

  • Overfitting
  • Bias–variance tradeoff
  • Model selection
  • Cross-validation (statistics)
  • Validity (statistics)
  • Verification and validation
  • Double descent
