Variance-Reduced Methods for Machine Learning

Journal article, Proceedings of the IEEE, 2020

Abstract

Stochastic optimization lies at the heart of machine learning, and its cornerstone is stochastic gradient descent (SGD), a method introduced over 60 years ago. The last 8 years have seen an exciting new development: variance reduction (VR) for stochastic optimization methods. These VR methods excel in settings where more than one pass through the training data is allowed, achieving faster convergence than SGD in theory as well as in practice. These speedups underline the surge of interest in VR methods and the fast-growing body of work on this topic. This review covers the key principles and main developments behind VR methods for optimization with finite data sets and is aimed at non-expert readers. We focus mainly on the convex setting and leave pointers for readers interested in extensions for minimizing non-convex functions.

Keywords: optimization, machine learning, variance reduction

* The classic way to implement GD is to determine γ as the approximate solution to min_{γ>0} f(x^k − γ∇f(x^k)). This is called a line search, since it is an optimization over a line.
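To make the variance-reduction idea in the abstract concrete, here is a minimal NumPy sketch of one canonical VR method, SVRG, applied to a finite-sum least-squares problem. It is an illustration only, not code from the paper; the function name, step-size rule, and synthetic data are assumptions. The key point is the gradient estimate ∇f_i(x) − ∇f_i(x̃) + ∇f(x̃), whose variance vanishes as both the iterate x and the snapshot x̃ approach the solution, which is what allows a constant step size to keep converging where plain SGD stalls.

import numpy as np

def svrg_least_squares(A, b, n_outer=50, step=None, seed=0):
    """SVRG for f(x) = (1/(2n)) * sum_i (a_i^T x - b_i)^2 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if step is None:
        # Conservative constant step based on the largest per-example smoothness ||a_i||^2 (assumed rule).
        step = 1.0 / (4.0 * np.max(np.sum(A * A, axis=1)))
    x = np.zeros(d)
    for _ in range(n_outer):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n            # full gradient at the snapshot
        for _ in range(n):                                # one inner epoch of stochastic steps
            i = rng.integers(n)
            gi_x = A[i] * (A[i] @ x - b[i])               # gradient of f_i at the current iterate
            gi_snap = A[i] * (A[i] @ x_snap - b[i])       # gradient of f_i at the snapshot
            x = x - step * (gi_x - gi_snap + full_grad)   # variance-reduced update
    return x

# Illustrative usage on synthetic data with noisy labels, so that individual
# gradients do not vanish at the solution and variance reduction actually matters.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = A @ rng.standard_normal(10) + 0.5 * rng.standard_normal(200)
x_hat = svrg_least_squares(A, b)
print(np.linalg.norm(A.T @ (A @ x_hat - b)) / len(b))    # full-gradient norm; shrinks as n_outer grows

With the same constant step size, plain SGD (obtained by replacing the update with x = x - step * gi_x) typically levels off at a noise floor on this problem, while the SVRG iterates continue to improve.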
Main file: VR-IEEE.pdf (7.71 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04182657, version 1 (17-08-2023)

License

Public domain

Identifiers

HAL Id: hal-04182657
DOI: 10.1109/JPROC.2020.3028013

Cite

Robert M Gower, Mark Schmidt, Francis Bach, Peter Richtárik. Variance-Reduced Methods for Machine Learning. Proceedings of the IEEE, 2020, 108 (11), ⟨10.1109/JPROC.2020.3028013⟩. ⟨hal-04182657⟩