How Does Lipschitz Regularization Influence GAN Training?

Yipeng Qin, Niloy Mitra, Peter Wonka

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution


Abstract

Despite the success of Lipschitz regularization in stabilizing GAN training, the exact reason for its effectiveness remains poorly understood. The direct effect of K-Lipschitz regularization is to restrict the L2-norm of the neural network gradient to be smaller than a threshold K (e.g., K = 1) such that ‖∇f‖ ≤ K. In this work, we uncover an even more important effect of Lipschitz regularization by examining its impact on the loss function: it degenerates GAN loss functions to almost linear ones by restricting their domain and interval of attainable gradient values. Our analysis shows that loss functions are only successful if they are degenerated to almost linear ones. We also show that loss functions perform poorly if they are not degenerated, and that a wide range of functions can be used as loss functions as long as they are sufficiently degenerated by regularization. In essence, Lipschitz regularization ensures that all loss functions effectively work in the same way. Empirically, we verify our proposition on the MNIST, CIFAR10 and CelebA datasets.
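
The abstract describes K-Lipschitz regularization as constraining the L2-norm of the network's input gradient to stay below a threshold K. As a purely illustrative aid (not the authors' code), the following Python/PyTorch sketch shows one common way such a constraint is imposed in practice, a gradient-penalty term; the function name lipschitz_penalty and the arguments D, real, fake, K are assumptions made for this example.

import torch

def lipschitz_penalty(D, real, fake, K=1.0):
    # Interpolate between real and fake samples, as gradient-penalty methods commonly do.
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
    d_out = D(x)
    # Gradient of the discriminator output with respect to its input.
    grads = torch.autograd.grad(
        outputs=d_out, inputs=x,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True,
    )[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    # One-sided penalty: only gradient norms exceeding the threshold K are penalized.
    return ((grad_norm - K).clamp(min=0) ** 2).mean()

In a typical training loop, this term would be added to the discriminator loss with a weight, e.g. d_loss = gan_loss + lambda_gp * lipschitz_penalty(D, real_batch, fake_batch, K=1.0).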
Original language: English (US)
Title of host publication: Computer Vision – ECCV 2020
Publisher: Springer International Publishing
Pages: 310-326
Number of pages: 17
ISBN (Print): 9783030585167
State: Published - Oct 9 2020
