Variance-Reduced Methods for Machine Learning
Abstract
Stochastic optimization lies at the heart of machine learning, and its cornerstone is stochastic gradient descent (SGD), a method introduced over 60 years ago. The last 8 years have seen an exciting new development: variance reduction (VR) for stochastic optimization methods. These VR methods excel in settings where more than one pass through the training data is allowed, achieving faster convergence than SGD both in theory and in practice. These speedups underlie the surge of interest in VR methods and the fast-growing body of work on this topic. This review covers the key principles and main developments behind VR methods for optimization with finite data sets and is aimed at non-expert readers. We focus mainly on the convex setting and leave pointers for readers interested in extensions to minimizing non-convex functions.

Keywords: optimization, machine learning, variance reduction

* The classic way to implement GD is to determine γ as the approximate solution to $\min_{\gamma > 0} f(x^k - \gamma \nabla f(x^k))$. This is called a line search since it is an optimization over a line.
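To make the step-size footnote concrete, here is a minimal Python sketch (not from the paper) of gradient descent where each γ is chosen by backtracking until a sufficient-decrease condition holds, a standard cheap approximation to solving $\min_{\gamma > 0} f(x^k - \gamma \nabla f(x^k))$. The quadratic test problem, the Armijo constant, and the shrink factor are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): gradient descent with a
# backtracking line search that approximately solves
#   min_{gamma > 0} f(x^k - gamma * grad_f(x^k)).
import numpy as np

def gd_with_line_search(f, grad_f, x0, n_iters=100, gamma0=1.0, shrink=0.5, c=0.5):
    x = x0.copy()
    for _ in range(n_iters):
        g = grad_f(x)
        gamma = gamma0
        # Shrink gamma until the Armijo sufficient-decrease condition holds,
        # a cheap stand-in for exactly minimizing over gamma > 0.
        while f(x - gamma * g) > f(x) - c * gamma * np.dot(g, g):
            gamma *= shrink
        x = x - gamma * g
    return x

# Illustrative least-squares problem: f(x) = 0.5 * ||A x - b||^2 (assumed data).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
x_hat = gd_with_line_search(f, grad_f, np.zeros(10))
print("final objective:", f(x_hat))
```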
Domains
Machine Learning [cs.LG]

| Origin | Files produced by the author(s) |
|---|---|
| License | Public domain |