Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization

Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan

Research output: Contribution to journal › Article › peer-review


Adaptivity is an important yet under-studied property in modern optimization theory. The gap between the state-of-the-art theory and current practice is striking in that algorithms with desirable theoretical guarantees typically involve drastically different settings of hyperparameters, such as step-size schemes and batch sizes, in different regimes. Despite the appealing theoretical results, such divisive strategies provide little, if any, guidance for practitioners in selecting algorithms that work broadly without tweaking hyperparameters. In this work, blending the "geometrization" technique introduced by [L. Lei and M. I. Jordan, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017, pp. 148--156] and the SARAH algorithm of [L. M. Nguyen, J. Liu, K. Scheinberg, and M. Takáč, Proceedings of the 34th International Conference on Machine Learning, 2017, pp. 2613--2621], we propose the geometrized SARAH algorithm for nonconvex finite-sum and stochastic optimization. Our algorithm is proved to achieve adaptivity to both the magnitude of the target accuracy and the Polyak--Łojasiewicz (PL) constant, if present. In addition, it simultaneously achieves the best-available convergence rate for non-PL objectives while outperforming existing algorithms for PL objectives.
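To make the two ingredients named in the abstract concrete, the sketch below combines the SARAH recursive gradient estimator of Nguyen et al. (v_t = ∇f_i(x_t) − ∇f_i(x_{t−1}) + v_{t−1}, restarted from a full gradient at each outer iteration) with a geometrically sampled inner-loop length in the spirit of the "geometrization" of Lei and Jordan. This is a minimal illustration on a toy least-squares problem; the hyperparameters and the exact geometric-sampling rule are illustrative assumptions, not the paper's prescribed settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum problem: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
n, d = 200, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    """Gradient of the i-th component 0.5 * (a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    """Full finite-sum gradient."""
    return A.T @ (A @ x - b) / n

def geom_sarah(x, eta=0.01, outer=30, q=0.05):
    """Sketch of a geometrized-SARAH-style loop (illustrative, not the
    paper's exact algorithm): each outer iteration restarts the SARAH
    estimator from a full gradient, then runs an inner loop whose
    length is drawn from a geometric distribution."""
    for _ in range(outer):
        v = full_grad(x)                    # restart estimator at snapshot
        x_prev, x = x, x - eta * v
        m = rng.geometric(q)                # random inner-loop length
        for _ in range(m):
            i = rng.integers(n)
            # SARAH recursion: v_t = g_i(x_t) - g_i(x_{t-1}) + v_{t-1}
            v = grad_i(x, i) - grad_i(x_prev, i) + v
            x_prev, x = x, x - eta * v
    return x

x0 = np.zeros(d)
x_out = geom_sarah(x0.copy())
```

Unlike SVRG, the SARAH estimator is updated recursively from the previous iterate rather than a fixed snapshot, which keeps its variance under control along the inner trajectory; the geometric inner-loop length is what drives the adaptivity analysis.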
Original language: English (US)
Pages (from-to): 634-648
Number of pages: 15
Issue number: 2
State: Published - May 12 2022


