Abstract
To assess the difference between real and synthetic data, Generative Adversarial Networks (GANs) are trained using a distribution discrepancy measure. Three widely employed measures are information-theoretic divergences, integral probability metrics, and Hilbert space discrepancy metrics. We elucidate the theoretical connections between these three popular GAN training criteria and propose a novel procedure, called χ²-GAN, that is conceptually simple, stable during training, and resistant to mode collapse. Our procedure naturally generalizes to address the problem of simultaneously matching multiple distributions. Further, we propose a resampling strategy that significantly improves sample quality by repurposing the trained critic function via an importance weighting mechanism. Experiments show that the proposed procedure improves stability and convergence, and yields state-of-the-art results on a wide range of generative modeling tasks.
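For reference, the Pearson χ² divergence on which the proposed criterion is presumably built is defined, for densities $p$ (real data) and $q$ (generator), as

$$
\chi^2(p \,\|\, q) \;=\; \int \frac{\big(p(x) - q(x)\big)^2}{q(x)}\, dx \;=\; \mathbb{E}_{x \sim q}\!\left[\left(\frac{p(x)}{q(x)} - 1\right)^{\!2}\right],
$$

i.e., a second-moment penalty on the deviation of the density ratio $p/q$ from 1; the paper's exact training criterion may differ in parameterization.

The critic-based resampling strategy can likewise be sketched concretely. The following is a minimal illustration under stated assumptions, not the paper's exact algorithm: it assumes the trained critic's output approximates the log density ratio $\log p_{\text{data}}(x)/p_{\text{gen}}(x)$, so that $\exp(\text{critic}(x))$ acts as an importance weight; `generator` and `critic` are hypothetical callables.

```python
import numpy as np

def importance_resample(generator, critic, n_samples, pool_factor=10, seed=None):
    """Resample generator outputs with critic-derived importance weights.

    Assumption (hypothetical interface): critic(batch) returns per-sample
    scores approximating log(p_data(x) / p_gen(x)), so exp(score) is an
    importance weight favoring samples the critic finds more realistic.
    """
    rng = np.random.default_rng(seed)
    pool = generator(pool_factor * n_samples)   # over-sample a candidate pool
    log_w = np.asarray(critic(pool), dtype=float)
    log_w -= log_w.max()                        # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()                                # normalize into a distribution
    idx = rng.choice(len(pool), size=n_samples, replace=True, p=w)
    return pool[idx]                            # high-weight samples are kept more often
```

In this sketch, the pool size trades compute for sample quality: a larger `pool_factor` lets the importance weights discard more low-quality candidates at the cost of extra generator passes.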
| Original language | English (US) |
| --- | --- |
| Title of host publication | 35th International Conference on Machine Learning, ICML 2018 |
| Publisher | International Machine Learning Society (IMLS) |
| Pages | 7787-7796 |
| Number of pages | 10 |
| ISBN (Print) | 9781510867963 |
| State | Published - Jan 1 2018 |
| Externally published | Yes |