On measure concentration of random maximum a-posteriori perturbations

Francesco Orabona, Tamir Hazan, Anand D. Sarwate, Tommi S. Jaakkola

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Scopus citations

Abstract

The maximum a-posteriori (MAP) perturbation framework has emerged as a useful approach for inference and learning in high-dimensional complex models. By maximizing a randomly perturbed potential function, MAP perturbations generate unbiased samples from the Gibbs distribution. Unfortunately, the computational cost of generating the required number of high-dimensional random variables can be prohibitive. More efficient algorithms use sequential sampling strategies based on the expected value of low-dimensional MAP perturbations. This paper develops new measure concentration inequalities that bound the number of samples needed to estimate such expected values. Applying the general result to MAP perturbations can yield a more efficient algorithm for approximate sampling from the Gibbs distribution. The measure concentration result is of general interest and may be applicable to other areas involving Monte Carlo estimation of expectations.
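The core mechanism the abstract refers to, maximizing a randomly perturbed potential to obtain exact Gibbs samples, can be sketched in a few lines for a small discrete model. This is an illustrative toy (the potentials, Gumbel noise choice, and sample count below are assumptions for the demo, not taken from the paper): adding i.i.d. Gumbel(0, 1) perturbations to every configuration's potential and taking the argmax yields draws from the Gibbs distribution, which is exactly why the cost scales with the number of high-dimensional perturbations.

```python
import numpy as np

# Toy perturb-and-MAP demo (illustrative only, not the paper's algorithm):
# perturb each configuration's potential with i.i.d. Gumbel(0,1) noise and
# take the argmax; the resulting draws follow the Gibbs distribution.
rng = np.random.default_rng(0)

theta = np.array([1.0, 2.0, 0.5, -1.0])        # potentials over 4 configurations (assumed)
gibbs = np.exp(theta) / np.exp(theta).sum()    # target Gibbs distribution

n = 200_000                                    # number of Monte Carlo draws (assumed)
gamma = rng.gumbel(size=(n, theta.size))       # one Gumbel perturbation per configuration
samples = np.argmax(theta + gamma, axis=1)     # MAP of the perturbed potential

empirical = np.bincount(samples, minlength=theta.size) / n
print(np.round(gibbs, 3))
print(np.round(empirical, 3))
```

The empirical frequencies match the Gibbs probabilities up to Monte Carlo error; the paper's concern is precisely how many such draws suffice when each perturbation is high-dimensional, motivating low-dimensional perturbations plus concentration bounds.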
Original language: English (US)
Title of host publication: 31st International Conference on Machine Learning, ICML 2014
Publisher: International Machine Learning Society (IMLS)
Pages: 674-692
Number of pages: 19
ISBN (Print): 9781634393973
State: Published - Jan 1 2014
Externally published: Yes
