One useful rule of thumb is that you may be overfitting when your model's performance on its own training set is much better than on its held-out validation set or in a cross-validation setting. That's not all there is to it, though.
The blog entry I linked to describes a procedure for testing for overfitting: plot training set and validation set error as a function of training set size. If the curves show a stable gap at the right end of the plot, you're probably overfitting.
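A rough sketch of that procedure with scikit-learn's learning_curve; the dataset and estimator here are placeholders, not anything from the linked post:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

# Toy data and an easily-overfit deep decision tree, purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
estimator = DecisionTreeClassifier(random_state=0)

# Training and cross-validated scores for increasing training set sizes.
sizes, train_scores, val_scores = learning_curve(
    estimator, X, y, train_sizes=np.linspace(0.1, 1.0, 8), cv=5)

plt.plot(sizes, 1 - train_scores.mean(axis=1), label="training error")
plt.plot(sizes, 1 - val_scores.mean(axis=1), label="validation error")
plt.xlabel("training set size")
plt.ylabel("error")
plt.legend()
plt.show()
# A persistent gap between the two curves at the right end of the plot
# suggests overfitting.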
Use a held-out test set. Only do evaluation on this set when you're completely done with model selection (hyperparameter tuning); don't train on it, don't use it in (cross-)validation. The score you get on the test set is the model's final evaluation. This should show whether you've accidentally overfit the validation set(s).
[Machine learning conferences are sometimes set up like a competition, where the test set is not given to the researchers until after they've delivered their final model to the organisers. In the meantime, they can use the training set as they please, e.g. by testing models using cross-validation. Kaggle does something similar.]
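A minimal sketch of that workflow with scikit-learn; the split ratio, estimator, and grid are arbitrary choices for illustration:

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)

# Carve off the test set first and do not touch it again until the very end.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# All model selection (here, a toy grid search with cross-validation)
# happens on the remaining data only.
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)
search.fit(X_trainval, y_trainval)

# One final evaluation on the held-out test set.
print("test accuracy:", search.score(X_test, y_test))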
Because you can tune the model as much as you want in this cross-validation setting, until it performs nearly perfectly in CV.
As an extreme example, suppose that you've implemented an estimator that is essentially a random number generator. You can keep trying random seeds until you hit a "model" that produces very low error in cross-validation, but that doesn't mean you've found the right model. It means you've overfit to the cross-validation.
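A hedged sketch of that extreme case, using scikit-learn's DummyClassifier as the "random number generator" model; the data and seed loop are made up for illustration:

import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.randn(40, 5)              # pure noise features
y = rng.randint(0, 2, size=40)    # random binary labels: nothing to learn

# "Tune" the model by trying many seeds and keeping the best CV score.
best_seed, best_score = None, -np.inf
for seed in range(2000):
    clf = DummyClassifier(strategy="uniform", random_state=seed)
    score = cross_val_score(clf, X, y, cv=5).mean()
    if score > best_score:
        best_seed, best_score = seed, score

# On data with no signal the true accuracy is ~0.5, yet the selected seed
# can look much better in CV -- that is overfitting to the CV splits.
print(best_seed, best_score)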
See also this interesting war story.
Friday, December 4, 2015
How to Detect Overfitting
Thursday, December 3, 2015
Tackling overfitting via regularization
Overfitting is a common problem in machine learning, where a model performs well on training data but does not generalize well to unseen data (test data). If a model suffers from overfitting, we also say that the model has high variance, which can be caused by having too many parameters, leading to a model that is too complex given the underlying data. Similarly, our model can also suffer from underfitting (high bias), which means that our model is not complex enough to capture the pattern in the training data well and therefore also performs poorly on unseen data.
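A small illustration of the regularization idea, assuming scikit-learn; the dataset and the values of the regularization strength C are arbitrary choices for this sketch:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Smaller C means stronger L2 regularization (simpler model, higher bias);
# larger C means weaker regularization (more variance, risk of overfitting).
for C in [0.01, 1.0, 100.0]:
    clf = LogisticRegression(C=C, penalty="l2", max_iter=1000)
    clf.fit(X_train, y_train)
    print(C, clf.score(X_train, y_train), clf.score(X_test, y_test))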
Why do I get different regression outputs in SAS and in Python
How to Get the Row Count of a Pandas DataFrame?
In [1]: import numpy as np

In [2]: import pandas as pd

In [3]: df = pd.DataFrame(np.arange(9).reshape(3, 3))

In [4]: df
Out[4]:
   0  1  2
0  0  1  2
1  3  4  5
2  6  7  8

In [5]: df.shape
Out[5]: (3, 3)

In [6]: len(df.index)
Out[6]: 3
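If you only want the row count (rather than the full (rows, columns) shape), a few equivalent options, shown here as a self-contained snippet mirroring the session above:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(9).reshape(3, 3))

print(len(df))        # 3 -- len() of a DataFrame counts its rows
print(df.shape[0])    # 3 -- first element of the shape tuple
print(len(df.index))  # 3 -- length of the row index, as above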