GO gradient for expectation-based objectives

Yulai Cong, Miaoyun Zhao, Ke Bai, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

12 Scopus citations

Abstract

Within many machine learning algorithms, a fundamental problem concerns efficient calculation of an unbiased gradient with respect to parameters $\gamma$ for expectation-based objectives $\mathbb{E}_{q_\gamma(y)}[f(y)]$. Most existing methods either (i) suffer from high variance, seeking help from (often) complicated variance-reduction techniques, or (ii) apply only to reparameterizable continuous random variables via the reparameterization trick. To address these limitations, we propose a General and One-sample (GO) gradient that (i) applies to many distributions associated with non-reparameterizable continuous or discrete random variables, and (ii) has the same low variance as the reparameterization trick. We find that the GO gradient often works well in practice based on only one Monte Carlo sample (although one can of course use more samples if desired). Alongside the GO gradient, we develop a means of propagating the chain rule through distributions, yielding statistical back-propagation, coupling neural networks to common random variables.
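For reference, the two estimator families the abstract contrasts can be illustrated on a toy problem. The sketch below (plain NumPy; the Gaussian choice of $q_\gamma$, the test function f(y) = y^2, and all names are illustrative assumptions, not material from the paper, and the GO gradient itself is not shown) compares a one-sample score-function (REINFORCE) estimate, which uses the identity $\nabla_\gamma \mathbb{E}_{q_\gamma(y)}[f(y)] = \mathbb{E}_{q_\gamma(y)}[f(y)\,\nabla_\gamma \log q_\gamma(y)]$, against a one-sample reparameterization-trick estimate based on y = mu + sigma * eps.

# Illustrative sketch (not the GO gradient): the two standard estimators the abstract
# contrasts, for q_gamma(y) = N(mu, sigma^2) and f(y) = y^2.
# The exact gradient of E[f(y)] = mu^2 + sigma^2 wrt (mu, sigma) is (2*mu, 2*sigma).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.0
f = lambda y: y ** 2

def score_function_grad(n):
    # Score-function / REINFORCE estimator: unbiased, applies to any distribution with a
    # tractable log-density, but typically high variance with few samples.
    y = rng.normal(mu, sigma, size=n)
    dlogq_dmu = (y - mu) / sigma ** 2
    dlogq_dsigma = ((y - mu) ** 2 - sigma ** 2) / sigma ** 3
    return np.mean(f(y) * dlogq_dmu), np.mean(f(y) * dlogq_dsigma)

def reparameterization_grad(n):
    # Reparameterization trick: y = mu + sigma * eps with eps ~ N(0, 1); low variance,
    # but only applicable to reparameterizable continuous random variables.
    eps = rng.normal(size=n)
    y = mu + sigma * eps
    df_dy = 2.0 * y  # f'(y) for f(y) = y^2
    return np.mean(df_dy * 1.0), np.mean(df_dy * eps)

print("exact gradient:               ", (2 * mu, 2 * sigma))
print("score-function (1 sample):    ", score_function_grad(1))
print("reparameterization (1 sample):", reparameterization_grad(1))

Running the sketch with a single Monte Carlo sample typically shows the reparameterized estimate landing much closer to the exact gradient than the score-function estimate, which is the low-variance, one-sample behavior the GO gradient aims to extend to non-reparameterizable continuous and discrete variables.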
Original language: English (US)
Title of host publication: 7th International Conference on Learning Representations, ICLR 2019
Publisher: International Conference on Learning Representations, ICLR
State: Published - Jan 1 2019
Externally published: Yes
