High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance

Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov*, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, Peter Richtárik

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

6 Scopus citations

Abstract

In recent years, interest in the high-probability convergence of stochastic optimization methods has been growing in the optimization and machine learning communities. One of the main reasons for this is that high-probability complexity bounds are more accurate, yet less studied, than in-expectation ones. However, state-of-the-art (SOTA) high-probability non-asymptotic convergence results are derived under strong assumptions, such as boundedness of the gradient noise variance or of the objective's gradient itself. In this paper, we propose several algorithms with high-probability convergence results under less restrictive assumptions. In particular, we derive new high-probability convergence results under the assumption that the gradient/operator noise has bounded central α-th moment for α ∈ (1, 2] in the following setups: (i) smooth non-convex/Polyak-Łojasiewicz/convex/strongly convex/quasi-strongly convex minimization problems, (ii) Lipschitz/star-cocoercive and monotone/quasi-strongly monotone variational inequalities. These results justify the usage of the considered methods for solving problems that do not fit standard functional classes studied in stochastic optimization.
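The key noise condition in the abstract can be written as $\mathbb{E}_{\xi}\big[\|\nabla f(x,\xi) - \nabla f(x)\|^{\alpha}\big] \le \sigma^{\alpha}$ for some $\sigma \ge 0$ and $\alpha \in (1, 2]$; for $\alpha < 2$ the noise may have unbounded variance. The paper's algorithms are built around gradient clipping, and the snippet below is only a minimal, illustrative clipped-SGD loop under that assumption, not the authors' exact method: the step size gamma, the clipping level clip_level, and the toy oracle are placeholders chosen here for illustration.

```python
import numpy as np

def clipped_sgd(grad_oracle, x0, gamma, clip_level, n_steps):
    """Illustrative clipped-SGD sketch: rescale each stochastic gradient so
    its norm never exceeds clip_level, which tames heavy-tailed noise whose
    central alpha-th moment is bounded for some alpha in (1, 2]."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        g = grad_oracle(x)                 # stochastic gradient estimate
        g_norm = np.linalg.norm(g)
        if g_norm > clip_level:
            g = g * (clip_level / g_norm)  # clip(g) = min(1, lambda/||g||) * g
        x -= gamma * g                     # plain SGD step on the clipped gradient
    return x

# Toy usage: quadratic objective f(x) = ||x||^2 / 2 with Student-t noise
# (df = 1.5: infinite variance, but finite alpha-th moment for alpha < 1.5,
# matching the heavy-tailed regime the abstract describes).
rng = np.random.default_rng(0)
oracle = lambda x: x + rng.standard_t(df=1.5, size=x.shape)
x_final = clipped_sgd(oracle, x0=5.0 * np.ones(10), gamma=0.05,
                      clip_level=1.0, n_steps=5000)
print(np.linalg.norm(x_final))  # iterates concentrate near the minimizer 0
```

Without the clipping step, a single heavy-tailed gradient sample can throw the iterate arbitrarily far; clipping caps the per-step movement, which is what makes high-probability guarantees possible in this regime.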

Original language: English (US)
Pages: 29563-29648
Number of pages: 86
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: Jul 23, 2023 – Jul 29, 2023

Conference

Conference: 40th International Conference on Machine Learning, ICML 2023
Country/Territory: United States
City: Honolulu
Period: 07/23/23 – 07/29/23

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
