Accelerated variance-reduced methods for saddle-point problems

Ekaterina Borodich, Vladislav Tominin, Yaroslav Tominin, Dmitry Kovalev, Alexander Gasnikov, Pavel Dvurechensky

Research output: Contribution to journal › Article › peer-review


We consider composite minimax optimization problems where the goal is to find a saddle point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms with complexity bounds that are optimal up to logarithmic factors. In particular, we consider strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. To the best of our knowledge, these are the first nearly-optimal algorithms for this setting.
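The composite minimax setting described in the abstract can be sketched as follows; the symbols (m component functions f_i, regularizers g and h) are illustrative notation, not taken verbatim from the paper:

```latex
\min_{x \in \mathbb{R}^{d_x}} \max_{y \in \mathbb{R}^{d_y}}
\; \frac{1}{m} \sum_{i=1}^{m} f_i(x, y) \;+\; g(x) \;-\; h(y),
```

where each f_i is smooth and non-bilinear in general, and g, h are simple (proximal-friendly) composite regularizers for the primal and dual variables, respectively.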
Original language: English (US)
Pages (from-to): 100048
Journal: EURO Journal on Computational Optimization
State: Published - Oct 18, 2022


