A deep generative deconvolutional image model

Yunchen Pu, Xin Yuan, Andrew Stevens, Chunyuan Li, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

26 Scopus citations

Abstract

A deep generative model is developed for representation and analysis of images, based on a hierarchical convolutional dictionary-learning framework. Stochastic unpooling is employed to link consecutive layers in the model, yielding top-down image generation. A Bayesian support vector machine is linked to the top-layer features, yielding max-margin discrimination. Deep deconvolutional inference is employed at test time to infer the latent features, and the top-layer features are connected with the max-margin classifier for discrimination tasks. The model is efficiently trained using a Monte Carlo expectation-maximization (MCEM) algorithm; the algorithm is implemented on graphics processing units (GPUs) to enable large-scale learning and fast testing. Excellent results are obtained on several benchmark datasets, including ImageNet, demonstrating that the proposed model achieves results that are highly competitive with similarly sized convolutional neural networks.
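The stochastic unpooling mentioned in the abstract can be illustrated with a minimal sketch: each low-resolution activation is placed at a single latent position within its pooling block of the higher-resolution map. The snippet below is a simplified illustration, not the authors' implementation; in the paper the block positions are latent multinomial variables inferred during learning, whereas here they are drawn uniformly at random for demonstration.

```python
import numpy as np

def stochastic_unpool(feature_map, block=2, rng=None):
    """Upsample a 2-D feature map by a factor of `block`, placing each
    value at one randomly chosen position within its block x block
    region and zeros elsewhere (a sketch of stochastic unpooling;
    uniform sampling stands in for the paper's inferred multinomial
    latent variables)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = feature_map.shape
    out = np.zeros((h * block, w * block))
    for i in range(h):
        for j in range(w):
            # Latent offset selecting where in the block the value lands.
            di, dj = rng.integers(block, size=2)
            out[i * block + di, j * block + dj] = feature_map[i, j]
    return out

low = np.arange(1.0, 5.0).reshape(2, 2)
high = stochastic_unpool(low, block=2)
print(high.shape)  # (4, 4): each input value occupies one cell of its 2x2 block
```

Because exactly one cell per block is active, the upsampled map is sparse and the total activation is preserved, which is what makes top-down image generation through successive unpooling layers tractable.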
Original language: English (US)
Title of host publication: Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016
Publisher: PMLR
Pages: 741-750
Number of pages: 10
State: Published - Jan 1 2016
Externally published: Yes
