Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm

Adil Salim, Peter Richtarik

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)


Abstract

We consider the task of sampling with respect to a log-concave probability distribution. The potential of the target distribution is assumed to be composite, i.e., written as the sum of a smooth convex term and a nonsmooth convex term possibly taking infinite values. The target distribution can be seen as a minimizer of the Kullback-Leibler divergence defined on the Wasserstein space (i.e., the space of probability measures). In the first part of this paper, we establish a strong duality result for this minimization problem. In the second part, we use the duality gap arising from the first part to study the complexity of the Proximal Stochastic Gradient Langevin Algorithm (PSGLA), which can be seen as a generalization of the Projected Langevin Algorithm. Our approach relies on viewing PSGLA as a primal-dual algorithm and covers many cases where the target distribution is not fully supported. In particular, we show that if the potential is strongly convex, the complexity of PSGLA is O(1/ε²) in terms of the 2-Wasserstein distance. In contrast, the complexity of the Projected Langevin Algorithm is O(1/ε¹²) in terms of total variation when the potential is convex.
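
To make the scheme concrete, the sketch below shows one plausible form of the PSGLA iteration: a gradient step on the smooth term, additive Gaussian noise, then a proximal step on the nonsmooth term. The quadratic potential f, the nonnegative-orthant constraint playing the role of the nonsmooth term g, and the names grad_f, prox_g, and psgla are illustrative assumptions, not the authors' code; the stochastic gradient is also replaced by an exact gradient for simplicity. With an indicator function as g, the prox becomes a projection and the scheme reduces to the Projected Langevin Algorithm mentioned in the abstract.

import numpy as np

def grad_f(x, mu):
    # Gradient of the smooth term f(x) = 0.5 * ||x - mu||^2 (strongly convex).
    return x - mu

def prox_g(y, gamma):
    # Prox of the indicator of {x >= 0} is the Euclidean projection,
    # independent of the step size gamma.
    return np.maximum(y, 0.0)

def psgla(x0, mu, gamma=1e-2, n_iters=10_000, rng=None):
    # One chain of a proximal gradient Langevin scheme (sketch):
    #   y_{k+1} = x_k - gamma * grad_f(x_k) + sqrt(2 * gamma) * N(0, I)
    #   x_{k+1} = prox_{gamma * g}(y_{k+1})
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        y = x - gamma * grad_f(x, mu) + np.sqrt(2.0 * gamma) * noise
        x = prox_g(y, gamma)
        samples.append(x.copy())
    return np.array(samples)

if __name__ == "__main__":
    # Sample from a Gaussian centered at mu, restricted to the nonnegative orthant.
    mu = np.array([1.0, -1.0])
    chain = psgla(x0=np.zeros(2), mu=mu)
    print("Empirical mean over the second half of the chain:",
          chain[len(chain) // 2:].mean(axis=0))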
Original language: English (US)
Title of host publication: 34th Conference on Neural Information Processing Systems, NeurIPS 2020
Publisher: Neural Information Processing Systems Foundation
State: Published - 2020
