Topic-guided variational autoencoders for text generation

Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

59 Scopus citations

Abstract

We propose a topic-guided variational autoencoder (TGVAE) model for text generation. Distinct from existing variational autoencoder (VAE) based approaches, which assume a simple Gaussian prior for the latent code, our model specifies the prior as a Gaussian mixture model (GMM) parameterized by a neural topic module. Each mixture component corresponds to a latent topic, which guides the generation of sentences under that topic. The neural topic module and the VAE-based neural sequence module in our model are learned jointly. In particular, a sequence of invertible Householder transformations is applied to endow the approximate posterior of the latent code with high flexibility during model inference. Experimental results show that our TGVAE outperforms alternative approaches on both unconditional and conditional text generation, and that it can generate semantically meaningful sentences across a variety of topics.
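
As a rough illustration of the two mechanisms the abstract describes, the PyTorch sketch below shows (i) a Gaussian-mixture prior whose components play the role of latent topics and (ii) a Householder flow that applies a sequence of orthogonal reflections to the posterior sample. All names, sizes, and the direct parameterization of the mixture are hypothetical; in the paper the mixture parameters come from the neural topic module and the reflection vectors from the inference network, both omitted here.

import math
import torch
import torch.nn as nn


class GMMPrior(nn.Module):
    """Gaussian-mixture prior with one diagonal-Gaussian component per topic."""

    def __init__(self, num_topics, latent_dim):
        super().__init__()
        # In the paper the mixture parameters are produced by the neural
        # topic module; here they are free parameters purely for illustration.
        self.means = nn.Parameter(torch.randn(num_topics, latent_dim) * 0.1)
        self.log_vars = nn.Parameter(torch.zeros(num_topics, latent_dim))

    def log_prob(self, z, topic_probs):
        # z: (batch, latent_dim); topic_probs: (batch, num_topics) mixture
        # weights, e.g. per-document topic proportions from the topic module.
        diff = z.unsqueeze(1) - self.means.unsqueeze(0)      # (B, T, D)
        log_comp = -0.5 * (
            math.log(2 * math.pi) + self.log_vars
            + diff.pow(2) / self.log_vars.exp()
        ).sum(-1)                                            # (B, T)
        return torch.logsumexp(
            topic_probs.clamp_min(1e-8).log() + log_comp, dim=1
        )                                                    # (B,)


def householder_flow(z, vs):
    """Apply a sequence of Householder reflections H = I - 2 v v^T / ||v||^2.

    Each reflection is orthogonal, so chaining them turns a sample from a
    diagonal-Gaussian posterior into one with a rotated, full covariance.
    """
    for v in vs:
        v = v / v.norm(dim=-1, keepdim=True)
        z = z - 2.0 * (z * v).sum(dim=-1, keepdim=True) * v
    return z


# Tiny usage example with hypothetical sizes.
prior = GMMPrior(num_topics=10, latent_dim=32)
z0 = torch.randn(4, 32)                       # reparameterized posterior sample
vs = [torch.randn(4, 32) for _ in range(3)]   # reflection vectors (from encoder)
zK = householder_flow(z0, vs)
topic_probs = torch.softmax(torch.randn(4, 10), dim=-1)
print(prior.log_prob(zK, topic_probs).shape)  # torch.Size([4])

One property worth noting: because each Householder reflection is orthogonal, the flow's Jacobian determinant is plus or minus one, so the change-of-variables log-determinant vanishes and the KL term stays cheap to evaluate, which is a common reason this family of transformations is chosen.
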
Original language: English (US)
Title of host publication: NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 166-177
Number of pages: 12
ISBN (Print): 9781950737130
State: Published - Jan 1 2019
Externally published: Yes
