Lp-WGAN: Using Lp-norm normalization to stabilize Wasserstein generative adversarial networks

Authors:

Highlights:

Abstract

Wasserstein generative adversarial networks (Wasserstein GANs, WGAN) significantly improve the performance of GANs by imposing a Lipschitz constraint on the critic, which is implemented by weight clipping. In this work, we argue that weight clipping can cause a side effect, called area collapse, by heavily modifying the orientations of weights. To fix this issue, a novel method called Lp-WGAN is presented, where lp-norm normalization is employed to impose the constraint. This method restricts the search space of weights to a low-dimensional manifold and focuses on searching over weight orientations. Experiments on toy datasets show that Lp-WGAN spreads probability mass and finds the underlying distribution earlier than WGAN with weight clipping. Results on the LSUN bedroom and CIFAR-10 datasets show that the proposed method stabilizes training better, generates competitive images earlier, and achieves higher evaluation scores.
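The contrast drawn in the abstract, between weight clipping (which truncates entries and can distort a weight vector's orientation) and lp-norm normalization (which rescales the whole vector, preserving orientation), can be illustrated with a minimal sketch. The function names and constants below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def lp_normalize(w, p=2.0, c=1.0):
    """Rescale weight tensor w so its lp-norm equals c.

    Unlike clipping, this keeps the direction of w intact and only fixes
    its length, restricting the search to a norm-c manifold (a sphere
    when p = 2). Hypothetical sketch of lp-norm normalization.
    """
    norm = np.sum(np.abs(w) ** p) ** (1.0 / p)
    return c * w / (norm + 1e-12)  # epsilon guards against division by zero

def weight_clip(w, c=0.01):
    """WGAN-style weight clipping: truncate each entry to [-c, c]."""
    return np.clip(w, -c, c)

# Cosine similarity measures how much an operation changed the orientation.
def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

w = np.array([0.5, -0.001, 0.2])
w_norm = lp_normalize(w, p=2.0, c=1.0)
w_clip = weight_clip(w, c=0.01)

print(cosine(w, w_norm))  # 1.0: normalization preserves orientation
print(cosine(w, w_clip))  # < 1.0: clipping alters orientation
```

The entries of `w` with magnitude above the clipping threshold `c` are all flattened to the same value, which is why the clipped vector points in a different direction; rescaling by the norm avoids this.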

Keywords: Generative adversarial networks, Lipschitz constraints, Normalization, Deep learning

Article history: Received 9 January 2018, Revised 3 August 2018, Accepted 5 August 2018, Available online 6 August 2018, Version of Record 31 October 2018.

DOI: https://doi.org/10.1016/j.knosys.2018.08.004