One of the key steps in building a machine learning model is to estimate its performance on data that the model hasn't seen before.
Let's assume that we fit our model on a training dataset and use the same data to estimate how well it performs in practice. Such a model can suffer from underfitting (high bias) if it is too simple, or it can overfit the training data (high variance) if it is too complex for the underlying training data, and evaluating on the training data itself cannot reveal overfitting, since an overly complex model can score near-perfectly on data it has memorized. To find an acceptable bias-variance trade-off, we need to evaluate our model carefully. Two common techniques are:
1. Holdout cross-validation
2. K-fold cross-validation
Both can help us obtain reliable estimates of the model's generalization error, that is, how well the model performs on unseen data.
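As a minimal sketch of both techniques using scikit-learn (the Iris dataset and logistic regression here are placeholder choices for illustration, not part of the original discussion; any estimator and dataset would work the same way):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

# Placeholder dataset and model for illustration.
X, y = load_iris(return_X_y=True)

# 1. Holdout cross-validation: set aside a test split that the model
#    never sees during training, then estimate performance on it once.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.3f}")

# 2. K-fold cross-validation: split the training data into k folds,
#    train on k-1 folds and validate on the held-out fold, rotating
#    through all k folds, then average the k validation scores.
scores = cross_val_score(LogisticRegression(max_iter=1000),
                         X_train, y_train, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Note that k-fold cross-validation is run here on the training portion only; the holdout test set stays untouched so it can still serve as a final, unbiased estimate of generalization performance.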