Abstract
We investigate the capabilities and limitations of Gaussian process (GP) models by jointly exploring three complementary directions: (i) scalable and statistically efficient inference; (ii) flexible kernels; and (iii) objective functions for hyperparameter learning that serve as alternatives to the marginal likelihood. Our approach outperforms all previous GP methods on the MNIST dataset; performs comparably to kernel-based methods on the RECTANGLES-IMAGE dataset; and breaks the 1% error-rate barrier for GP models on the MNIST8M dataset, while showing unprecedented scalability (8 million observations) in GP classification. Overall, our approach represents a significant breakthrough in kernel methods and GP models, bridging the gap between deep learning and kernel machines.
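For context on direction (iii), the standard objective the abstract contrasts with is the GP log marginal likelihood. Below is a minimal, hypothetical sketch (not the paper's method) of that objective for a squared-exponential kernel with Gaussian noise; all function names and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel matrix between two sets of inputs.
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_log_marginal_likelihood(X, y, lengthscale=1.0, variance=1.0, noise=0.1):
    # log p(y | X, theta) = -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2*pi),
    # where K = K_f + noise * I. Hyperparameters theta are typically learned
    # by maximizing this quantity; the abstract studies alternatives to it.
    n = X.shape[0]
    K = rbf_kernel(X, X, lengthscale, variance) + noise * np.eye(n)
    L = np.linalg.cholesky(K)  # K = L L^T, numerically stable solve
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))   # 1/2 log|K| via Cholesky diagonal
            - 0.5 * n * np.log(2.0 * np.pi))
```

Note the O(n^3) Cholesky factorization: this cubic cost is what motivates the scalable-inference direction (i) when n reaches millions of observations.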
Original language | English (US)
---|---
State | Published - 2017
Event | 33rd Conference on Uncertainty in Artificial Intelligence, UAI 2017 - Sydney, Australia
Duration | Aug 11 2017 → Aug 15 2017
Conference
Conference | 33rd Conference on Uncertainty in Artificial Intelligence, UAI 2017
---|---
Country/Territory | Australia
City | Sydney
Period | Aug 11 2017 → Aug 15 2017
ASJC Scopus subject areas
- Artificial Intelligence