Generative Adversarial Networks and Continual Learning

Kevin J Liang, Chunyuan Li, Guoyin Wang, Lawrence Carin

Research output: Contribution to journal › Article › peer-review

Abstract

There is a strong emphasis in the continual learning literature on sequential classification experiments, where each task bears little resemblance to previous ones. While certainly a form of continual learning, such tasks do not accurately represent many real-world continual learning problems, where the data distribution often evolves slowly over time. We propose using Generative Adversarial Networks (GANs) as a source of potentially unlimited datasets of this nature. We also identify that the dynamics of GAN training naturally constitute a continual learning problem, and show that leveraging continual learning methods can improve GAN performance. As such, we show that techniques from continual learning and GANs, typically studied separately, can each be used to the other's benefit.
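The abstract states that applying continual learning methods to GAN training can improve performance. As a minimal sketch of how such a combination might look, the snippet below adds an elastic weight consolidation (EWC)-style penalty to a GAN discriminator's loss in PyTorch; the choice of EWC, the function names, and the hyperparameter lam are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: an EWC-style penalty on a GAN discriminator so it retains
# its responses to earlier generator "tasks". Illustrative only; EWC is an
# assumed choice of continual learning method, not taken from the abstract.
import torch


def snapshot(discriminator):
    """Detached copy of the current discriminator parameters (the EWC anchor)."""
    return {n: p.detach().clone() for n, p in discriminator.named_parameters()}


def fisher_diagonal(discriminator, real_batch, fake_batch, criterion):
    """One-batch approximation of the diagonal Fisher information
    of the discriminator parameters."""
    discriminator.zero_grad()
    real_out = discriminator(real_batch)
    fake_out = discriminator(fake_batch)
    loss = (criterion(real_out, torch.ones_like(real_out)) +
            criterion(fake_out, torch.zeros_like(fake_out)))
    loss.backward()
    return {n: p.grad.detach() ** 2
            for n, p in discriminator.named_parameters()
            if p.grad is not None}


def ewc_penalty(discriminator, old_params, fisher, lam=10.0):
    """Quadratic penalty pulling parameters toward their anchored values,
    weighted by the Fisher diagonal."""
    penalty = torch.zeros(())
    for n, p in discriminator.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty


# During training, the usual discriminator loss would be augmented as:
#   d_loss = bce_real + bce_fake + ewc_penalty(D, old_params, fisher)
# with old_params / fisher refreshed periodically as the generator evolves.
```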
Original language: English (US)
Pages (from-to): 1-10
Number of pages: 10
Journal: NIPS
Issue number: NIPS 2018
State: Published - 2018
Externally published: Yes
