TY - GEN
T1 - SGD: General Analysis and Improved Rates
T2 - 36th International Conference on Machine Learning, ICML 2019
AU - Gower, Robert M.
AU - Loizou, Nicolas
AU - Qian, Xun
AU - Sailanbayev, Alibek
AU - Shulgin, Egor
AU - Richtárik, Peter
N1 - Funding Information:
RMG acknowledges the support by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with Gaspard Monge Program for optimization, operations research and their interactions with data sciences.
Publisher Copyright:
© 2019 International Machine Learning Society (IMLS).
PY - 2019
Y1 - 2019
AB - We propose a general yet simple theorem describing the convergence of SGD under the arbitrary-sampling paradigm. Our theorem describes the convergence of an infinite array of variants of SGD, each of which is associated with a specific probability law governing the data selection rule used to form mini-batches. This is the first time such an analysis is performed, and most of our variants of SGD were never explicitly considered in the literature before. Our analysis relies on the recently introduced notion of expected smoothness and does not rely on a uniform bound on the variance of the stochastic gradients. By specializing our theorem to different mini-batching strategies, such as sampling with replacement and independent sampling, we derive exact expressions for the stepsize as a function of the mini-batch size. With this we can also determine the mini-batch size that optimizes the total complexity, and show explicitly that as the variance of the stochastic gradient evaluated at the minimum grows, so does the optimal mini-batch size. For zero variance, the optimal mini-batch size is one. Moreover, we prove insightful stepsize-switching rules which describe when one should switch from a constant to a decreasing stepsize regime.
UR - http://www.scopus.com/inward/record.url?scp=85078128007&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85078128007
T3 - 36th International Conference on Machine Learning, ICML 2019
SP - 9090
EP - 9112
BT - 36th International Conference on Machine Learning, ICML 2019
PB - International Machine Learning Society (IMLS)
Y2 - 9 June 2019 through 15 June 2019
ER -