MCMC for variationally sparse Gaussian processes

James Hensman, Alexander G. de G. Matthews, Maurizio Filippone, Zoubin Ghahramani

Research output: Contribution to conference › Paper › peer-review



Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has gone into attacking three issues with GP models: how to compute efficiently when the number of data points is large; how to approximate the posterior when the likelihood is not Gaussian; and how to estimate covariance function parameter posteriors. This paper simultaneously addresses all three, using a variational approximation to the posterior that is sparse in the support of the function but otherwise free-form. The result is a Hybrid Monte Carlo sampling scheme that allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs. Code to replicate each experiment in this paper is available at
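To make the abstract's idea concrete, the sketch below runs Hamiltonian (Hybrid) Monte Carlo over the inducing-point values of a sparse GP with a non-Gaussian (Bernoulli) likelihood. It is a minimal illustration only, not the paper's actual scheme: the paper additionally samples covariance parameters and uses a free-form variational posterior, while this toy fixes the kernel hyperparameters and the inducing inputs `Z` (all names here are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between 1-D input vectors a and b.
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# Toy 1-D classification data: non-Gaussian (Bernoulli) likelihood.
X = np.linspace(-3, 3, 40)
y = (np.sin(X) > 0).astype(float)

# Inducing inputs: the sparse support of the approximation.
Z = np.linspace(-3, 3, 8)
Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))
Kzz_inv = np.linalg.inv(Kzz)
A = rbf(X, Z) @ Kzz_inv          # maps inducing values u to latent f = A @ u

def sigmoid(f):
    return 1.0 / (1.0 + np.exp(-f))

def log_post(u):
    # Unnormalised log posterior: Bernoulli log-likelihood + GP prior on u.
    f = A @ u
    log_lik = np.sum(y * f - np.log1p(np.exp(f)))
    log_prior = -0.5 * u @ Kzz_inv @ u
    return log_lik + log_prior

def grad_log_post(u):
    f = A @ u
    return A.T @ (y - sigmoid(f)) - Kzz_inv @ u

def hmc_step(u, eps=0.05, n_leap=20):
    # One HMC transition: sample momentum, leapfrog, Metropolis correction.
    p = rng.standard_normal(len(u))
    u_new, p_new = u.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_post(u_new)
    for _ in range(n_leap - 1):
        u_new += eps * p_new
        p_new += eps * grad_log_post(u_new)
    u_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(u_new)
    h_old = -log_post(u) + 0.5 * p @ p
    h_new = -log_post(u_new) + 0.5 * p_new @ p_new
    if np.log(rng.random()) < h_old - h_new:
        return u_new, True
    return u, False

u = np.zeros(len(Z))
samples, accepts = [], 0
for _ in range(200):
    u, accepted = hmc_step(u)
    accepts += accepted
    samples.append(u.copy())
samples = np.asarray(samples)
```

The draws in `samples` approximate the posterior over the inducing values; pushing them through `A` gives posterior samples of the latent function at the training inputs, which is the sense in which the computations stay efficient (all linear algebra is in the small number of inducing points).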

Original language: English (US)
Number of pages: 9
State: Published - 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: Dec 7, 2015 - Dec 12, 2015


ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing


