Friday, January 13, 2017

Gradient Descent

Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit), and graphical models.[6] When combined with the backpropagation algorithm, it is the de facto standard for training artificial neural networks.
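
To make this concrete, here is a minimal sketch of stochastic gradient descent for logistic regression: each update looks at a single training example and moves the weights against the gradient of the log-loss on that example. The toy data, learning rate, and epoch count below are illustrative assumptions, not recommendations.

import numpy as np

def sgd_logistic_regression(X, y, lr=0.1, epochs=20, seed=0):
    # Fit weights w by stochastic gradient descent on the logistic loss,
    # updating on one randomly ordered example at a time.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))  # predicted P(y = 1 | x)
            w -= lr * (p - y[i]) * X[i]          # per-example gradient step
    return w

# Toy usage: two Gaussian blobs labeled 0 and 1 (illustrative data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print("learned weights:", sgd_logistic_regression(X, y))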

Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for training linear regression models, originally under the name ADALINE.

Another popular stochastic gradient descent algorithm is the least mean squares (LMS) adaptive filter.
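
The LMS update is the same delta rule that ADALINE used: after every sample, nudge the filter weights along the instantaneous error gradient. Below is a minimal sketch; the filter length, step size mu, and the toy system-identification setup are illustrative assumptions.

import numpy as np

def lms_filter(x, d, num_taps=4, mu=0.02):
    # Adapt FIR filter weights w so that w . u[n] tracks the desired
    # signal d[n]; each update is a stochastic gradient step on the
    # instantaneous squared error.
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1 : n + 1][::-1]  # most recent num_taps samples, x[n] first
        e[n] = d[n] - w @ u                    # instantaneous error
        w += mu * e[n] * u                     # LMS (delta-rule) update
    return w

# Toy usage: identify an unknown 4-tap system from its noisy output
# (the system h and the noise level are illustrative assumptions).
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
h = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.normal(size=len(x))
print("estimated taps:", np.round(lms_filter(x, d), 2))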


Gradient Boosting = Gradient Descent + Boosting

The slogan is apt: gradient boosting is gradient descent in function space. Each boosting round fits a weak learner, typically a small regression tree, to the negative gradient of the loss evaluated at the current ensemble's predictions, then adds it to the ensemble with a small step size.
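
Here is a minimal sketch of that idea for squared-error loss, where the negative gradient is simply the residual y - f. The tree depth, learning rate, and number of rounds are illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_rounds=100, lr=0.1, max_depth=2):
    # Each round fits a small tree to the negative gradient of the
    # squared-error loss (the residual y - f) and takes a scaled
    # descent step in function space.
    base = y.mean()
    f = np.full(len(y), base)
    trees = []
    for _ in range(n_rounds):
        residuals = y - f                      # negative gradient of (1/2)(y - f)^2
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        f += lr * tree.predict(X)              # gradient descent step in function space
        trees.append(tree)
    return base, trees

def boosted_predict(base, trees, X, lr=0.1):
    return base + lr * sum(t.predict(X) for t in trees)

# Toy usage: recover a noisy sine curve (illustrative data and settings).
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 6.0, 200)).reshape(-1, 1)
y = np.sin(X.ravel()) + 0.1 * rng.normal(size=200)
base, trees = gradient_boost(X, y)
print("train MSE:", np.mean((boosted_predict(base, trees, X) - y) ** 2))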

