TY - JOUR
T1 - Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization
AU - Khaled, Ahmed
AU - Sebbouh, Othmane
AU - Loizou, Nicolas
AU - Gower, Robert M.
AU - Richtárik, Peter
N1 - KAUST Repository Item: Exported on 2023-10-03
PY - 2023/9/27
Y1 - 2023/9/27
AB - We present a unified theorem for the convergence analysis of stochastic gradient algorithms for minimizing a smooth and convex loss plus a convex regularizer. We do this by extending the unified analysis of Gorbunov et al. (in: AISTATS, 2020) and dropping the requirement that the loss function be strongly convex. Instead, we rely only on convexity of the loss function. Our unified analysis applies to a host of existing algorithms such as proximal SGD, variance-reduced methods, quantization, and some coordinate descent-type methods. For the variance-reduced methods, we recover the best known convergence rates as special cases. For proximal SGD and for the quantization and coordinate descent-type methods, we uncover new state-of-the-art convergence rates. Our analysis also covers any form of sampling or minibatching. As such, we are able to determine the minibatch size that optimizes the total complexity of variance-reduced methods. We showcase this by obtaining a simple formula for the optimal minibatch size of two variance-reduced methods (L-SVRG and SAGA). This optimal minibatch size not only improves the theoretical total complexity of the methods but also improves their convergence in practice, as we show in several experiments.
UR - http://hdl.handle.net/10754/663909
UR - https://link.springer.com/10.1007/s10957-023-02297-y
DO - 10.1007/s10957-023-02297-y
M3 - Article
SN - 0022-3239
JO - Journal of Optimization Theory and Applications
JF - Journal of Optimization Theory and Applications
ER -