Convergence analysis of inexact randomized iterative methods

Nicolas Loizou, Peter Richtárik

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper we present a convergence rate analysis of inexact variants of several randomized iterative methods for solving three closely related problems: a convex stochastic quadratic optimization problem, a best approximation problem, and its dual, a concave quadratic maximization problem. Among the methods studied are stochastic gradient descent, stochastic Newton, stochastic proximal point, and stochastic subspace ascent. A common feature of these methods is that in their update rule a certain subproblem needs to be solved exactly. We relax this requirement by allowing for the subproblem to be solved inexactly. We provide iteration complexity results under several assumptions on the inexactness error. Inexact variants of many popular and some more exotic methods, including randomized block Kaczmarz, Gaussian block Kaczmarz, and randomized block coordinate descent, can be cast as special cases. Numerical experiments demonstrate the benefits of allowing inexactness.
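To make the idea of inexact updates concrete, the following is a minimal NumPy sketch of an inexact randomized block Kaczmarz step for a consistent linear system Ax = b, one of the special cases mentioned above. The block projection subproblem is solved approximately by running only a fixed number of conjugate-gradient iterations instead of an exact solve. The function names, block size, and inner iteration count are illustrative assumptions, not the paper's stated algorithm or parameters.

```python
import numpy as np

def cg_steps(M, r, num_steps):
    """Run a fixed number of conjugate-gradient steps on M y = r (M symmetric PD).
    Stopping early is what makes the block subproblem solve inexact."""
    y = np.zeros_like(r)
    res = r - M @ y
    p = res.copy()
    rs = res @ res
    for _ in range(num_steps):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        y += alpha * p
        res -= alpha * Mp
        rs_new = res @ res
        if np.sqrt(rs_new) < 1e-12:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return y

def inexact_block_kaczmarz(A, b, block_size=20, inner_steps=3, iters=2000, seed=0):
    """Inexact randomized block Kaczmarz sketch for a consistent system A x = b.
    Each iteration samples a row block S and approximates the projection step
        x <- x - A_S^T (A_S A_S^T)^{-1} (A_S x - b_S)
    by running only `inner_steps` CG iterations on the small block system."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.choice(m, size=block_size, replace=False)
        A_S, b_S = A[S], b[S]
        residual = A_S @ x - b_S
        # Inexact solve of the block system (A_S A_S^T) lam = A_S x - b_S.
        lam = cg_steps(A_S @ A_S.T, residual, inner_steps)
        x -= A_S.T @ lam
    return x

# Toy usage on a random consistent system.
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 100))
x_star = rng.standard_normal(100)
b = A @ x_star
x = inexact_block_kaczmarz(A, b)
print("relative error:", np.linalg.norm(x - x_star) / np.linalg.norm(x_star))
```

In this sketch the inexactness error at each iteration is controlled by the number of inner CG steps; the paper's analysis instead works with explicit assumptions on the norm of the error in the subproblem solution.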
Original language: English (US)
Pages (from-to): A3979-A4016
Number of pages: 38
Journal: SIAM Journal on Scientific Computing
Volume: 42
Issue number: 6
State: Published - Dec 15 2020
