Learning autoencoders with relational regularization

Hongteng Xu, Dixin Luo, Ricardo Henao, Svati Shah, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A new algorithmic framework is proposed for learning autoencoders of data distributions. We minimize the discrepancy between the model and target distributions, with a relational regularization on the learnable latent prior. This regularization penalizes the fused Gromov-Wasserstein (FGW) distance between the latent prior and its corresponding posterior, allowing one to flexibly learn a structured prior distribution associated with the generative model. Moreover, it enables the co-training of multiple autoencoders even when they have heterogeneous architectures and incomparable latent spaces. We implement the framework with two scalable algorithms, making it applicable to both probabilistic and deterministic autoencoders. Our relational regularized autoencoder (RAE) outperforms existing methods, e.g., the variational autoencoder, the Wasserstein autoencoder, and their variants, on generating images. Additionally, our relational co-training strategy for autoencoders achieves encouraging results on both synthetic and real-world multi-view learning tasks. The code is at https://github.com/HongtengXu/Relational-AutoEncoders.
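
For intuition, below is a minimal sketch of how an FGW-based relational regularizer can be computed between samples from the latent prior and the aggregated posterior. It assumes the POT (Python Optimal Transport) library; the function and variable names are illustrative assumptions, not drawn from the authors' released code.

import numpy as np
import ot  # POT: Python Optimal Transport


def fgw_regularizer(z_prior, z_posterior, alpha=0.5):
    # Uniform weights over the two sets of latent samples.
    n, m = len(z_prior), len(z_posterior)
    p = np.full(n, 1.0 / n)
    q = np.full(m, 1.0 / m)
    # Feature cost: pairwise distances between prior and posterior codes.
    M = ot.dist(z_prior, z_posterior)
    # Structure costs: intra-set distance matrices, so the fused distance
    # also compares the relational structure of the two point clouds.
    C1 = ot.dist(z_prior, z_prior)
    C2 = ot.dist(z_posterior, z_posterior)
    # alpha trades off the Wasserstein (feature) term against the
    # Gromov-Wasserstein (structure) term of the fused distance.
    return ot.gromov.fused_gromov_wasserstein2(
        M, C1, C2, p, q, loss_fun='square_loss', alpha=alpha)


# Toy usage: penalize the FGW distance between 100 prior and 100 posterior codes.
rng = np.random.default_rng(0)
print(fgw_regularizer(rng.normal(size=(100, 8)),
                      rng.normal(loc=0.1, size=(100, 8))))

Because the Gromov-Wasserstein term compares only intra-set distance matrices, the same penalty applies even when the two latent spaces have different dimensions, which is what permits co-training autoencoders with incomparable latent spaces.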
Original language: English (US)
Title of host publication: 37th International Conference on Machine Learning, ICML 2020
Publisher: International Machine Learning Society (IMLS)
Pages: 10507-10517
Number of pages: 11
ISBN (Print): 9781713821120
State: Published - Jan 1 2020
Externally published: Yes
