A fully stochastic primal-dual algorithm

Pascal Bianchi, Walid Hachem, Adil Salim

Research output: Contribution to journal › Article › peer-review

Abstract

A new stochastic primal-dual algorithm for solving a composite optimization problem is proposed. It is assumed that all the functions and operators that enter the optimization problem are given as statistical expectations. These expectations are unknown but are revealed across time through i.i.d. realizations. The proposed algorithm is proven to converge to a saddle point of the Lagrangian function. Within the framework of monotone operator theory, the convergence proof relies on recent results on the stochastic Forward-Backward algorithm involving random monotone operators. An example of convex optimization under stochastic linear constraints is considered.
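
As a rough illustration of the type of iteration described above, the following Python sketch runs a generic stochastic primal-dual scheme of the Arrow-Hurwicz type on a toy quadratic objective with stochastic linear constraints, where each step uses a single i.i.d. realization of the random data. The problem data, noise model, and step-size rule are illustrative assumptions; the sketch is not claimed to reproduce the exact algorithm or hypotheses of the paper.

    import numpy as np

    # Toy illustration only: minimize E[0.5 * ||x - c(xi)||^2]
    # subject to E[A(xi)] x = E[b(xi)], where only i.i.d. samples of
    # c(xi), A(xi), b(xi) are observed at each iteration.
    # All data, noise levels, and step sizes are arbitrary choices,
    # not taken from the paper.

    rng = np.random.default_rng(0)
    d, m = 5, 2                        # primal and dual dimensions
    A_bar = rng.standard_normal((m, d))
    b_bar = rng.standard_normal(m)
    c_bar = rng.standard_normal(d)

    x = np.zeros(d)                    # primal iterate
    lam = np.zeros(m)                  # dual iterate (Lagrange multipliers)

    n_iter = 20000
    for k in range(1, n_iter + 1):
        # Draw one i.i.d. realization of the random data.
        A_k = A_bar + 0.1 * rng.standard_normal((m, d))
        b_k = b_bar + 0.1 * rng.standard_normal(m)
        c_k = c_bar + 0.1 * rng.standard_normal(d)

        gamma = 1.0 / np.sqrt(k)       # decreasing step size

        # Stochastic gradient of the sampled Lagrangian
        # L_k(x, lam) = 0.5*||x - c_k||^2 + <lam, A_k x - b_k>.
        grad_x = (x - c_k) + A_k.T @ lam

        # Primal descent and dual ascent, both using the sampled data.
        x_new = x - gamma * grad_x
        lam = lam + gamma * (A_k @ x - b_k)
        x = x_new

    # (x, lam) should approach a saddle point of the Lagrangian of the
    # averaged problem as the number of iterations grows.
    print("primal iterate:", x)
    print("constraint residual:", A_bar @ x - b_bar)
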

Original language: English (US)
Journal: Optimization Letters
State: Published - Jul 14 2020
