In simple terms, regularization is tuning or selecting the preferred level of model complexity so your models are better at predicting (generalizing). If you don't do this, your models may be too complex and overfit, or too simple and underfit; either way they will give poor predictions.
To regularize you need two things:
- A way of testing how good your models are at prediction, for example using cross-validation or a set of validation data (you can't use the fitting error for this).
- A tuning parameter that lets you change the complexity or smoothness of the model, or a selection of models of differing complexity/smoothness (see the sketch below).
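
As a minimal sketch of both ingredients (assuming scikit-learn and a synthetic dataset, neither of which is from the original post): cross-validation scores act as the prediction test, and the ridge penalty `alpha` acts as the tuning parameter controlling complexity.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

# Synthetic regression problem, purely for illustration.
X, y = make_regression(n_samples=200, n_features=30, noise=10.0, random_state=0)

alphas = [0.01, 0.1, 1.0, 10.0, 100.0]  # candidate complexity levels
cv_scores = []
for alpha in alphas:
    model = Ridge(alpha=alpha)
    # 5-fold cross-validation: held-out prediction error, not the fitting error.
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    cv_scores.append(scores.mean())

# Pick the penalty with the best cross-validated prediction score.
best_alpha = alphas[int(np.argmax(cv_scores))]
print("Cross-validated MSE by alpha:", dict(zip(alphas, [-s for s in cv_scores])))
print("Selected alpha:", best_alpha)
```

The same pattern works with any model family: swap the penalty for whatever knob controls complexity (tree depth, number of basis functions, etc.) and keep the held-out scoring step unchanged.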