SGD: General Analysis and Improved Rates
Conference paper, 2019

Abstract

We propose a general yet simple theorem describing the convergence of SGD under the arbitrary sampling paradigm. Our theorem describes the convergence of an infinite array of variants of SGD, each of which is associated with a specific probability law governing the data selection rule used to form mini-batches. This is the first time such an analysis is performed, and most of our variants of SGD have never been explicitly considered in the literature before. Our analysis relies on the recently introduced notion of expected smoothness and does not require a uniform bound on the variance of the stochastic gradients. By specializing our theorem to different mini-batching strategies, such as sampling with replacement and independent sampling, we derive exact expressions for the stepsize as a function of the mini-batch size. With this we can also determine the mini-batch size that optimizes the total complexity, and show explicitly that as the variance of the stochastic gradient evaluated at the minimum grows, so does the optimal mini-batch size. For zero variance, the optimal mini-batch size is one. Moreover, we prove insightful stepsize-switching rules that describe when one should switch from a constant to a decreasing stepsize regime.
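
As a rough illustration of the setup the abstract describes, here is a minimal, hypothetical Python sketch of mini-batch SGD under two of the sampling laws mentioned above (sampling with replacement and independent sampling), together with a simple constant-to-decreasing stepsize switch. The quadratic component functions, constants, and the O(1/k) decay schedule are illustrative stand-ins, not the paper's exact stepsizes or switching rule.

```python
import numpy as np

# Illustrative finite-sum problem: f(x) = (1/n) * sum_i f_i(x),
# with quadratic f_i(x) = 0.5 * (a_i^T x - b_i)^2 chosen for concreteness.
# All names and constants below are hypothetical, not from the paper.
rng = np.random.default_rng(0)
n, d = 100, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    """Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

def sample_with_replacement(tau):
    """Mini-batch of size tau drawn uniformly with replacement."""
    return rng.integers(0, n, size=tau)

def sample_independent(p):
    """Independent (Bernoulli) sampling: each index joins the batch w.p. p."""
    idx = np.flatnonzero(rng.random(n) < p)
    # Guard against the (rare) empty batch by falling back to one sample.
    return idx if idx.size > 0 else rng.integers(0, n, size=1)

def sgd(x0, steps, batch_fn, gamma0, switch_at=None):
    """Run SGD with a constant stepsize gamma0; after `switch_at` iterations,
    decay the stepsize as O(1/k) -- a stand-in for the paper's
    constant-to-decreasing switching rule."""
    x = x0.copy()
    for k in range(steps):
        batch = batch_fn()
        g = np.mean([grad_i(x, i) for i in batch], axis=0)
        if switch_at is None or k < switch_at:
            gamma = gamma0
        else:
            gamma = gamma0 * switch_at / (k + 1)
        x -= gamma * g
    return x

# Usage: same driver, two different sampling laws.
x_wr = sgd(np.zeros(d), steps=2000,
           batch_fn=lambda: sample_with_replacement(8),
           gamma0=0.05, switch_at=500)
x_ind = sgd(np.zeros(d), steps=2000,
            batch_fn=lambda: sample_independent(0.08),
            gamma0=0.05, switch_at=500)
```

In the paper's framework, each choice of `batch_fn` induces its own expected smoothness constant, which in turn dictates the admissible stepsize as a function of the mini-batch size; the sketch above simply fixes the stepsize by hand instead.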
Main file: qian19b.pdf (898.43 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02365318, version 1 (15-11-2019)

Identifiers

  • HAL Id: hal-02365318, version 1

Cite

Robert M. Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, et al. SGD: General Analysis and Improved Rates. International Conference on Machine Learning (ICML), Jun 2019, Long Beach, United States. ⟨hal-02365318⟩
117 views
219 downloads
