Incremental focal loss GANs

Authors:

Highlights:

Abstract

Generative Adversarial Networks (GANs) have achieved impressive performance in both unsupervised image generation and conditional cross-modal image translation. However, generating high-quality images at an affordable cost remains challenging. We argue that it is the vast number of easy examples that disturbs the training of GANs, and propose to address this problem by down-weighting the losses assigned to easy examples. Our novel Incremental Focal Loss (IFL) progressively focuses training on hard examples and prevents easy examples from overwhelming the generator and discriminator during training. In addition, we propose an enhanced self-attention (ESA) mechanism to boost the representational capacity of the generator. We apply IFL and ESA to a number of unsupervised and conditional GANs, and conduct experiments on various tasks, including face photo-sketch synthesis, map↔aerial-photo translation, single-image super-resolution reconstruction, and image generation on CelebA, LSUN, and CIFAR-10. Results show that IFL improves the learning of GANs over existing loss functions. Moreover, both IFL and ESA enable GANs to produce high-quality images with realistic details in all these tasks, even when no task-specific adaptation is involved.
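The core idea of focal-style down-weighting can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses the standard focal loss form FL(p_t) = -(1 - p_t)^γ · log(p_t) from Lin et al., and the linear ramp of the focusing parameter γ (`incremental_gamma`) is an assumed schedule chosen only to illustrate how focusing on hard examples could be made "incremental" over training.

```python
import math

def focal_bce(p, target, gamma):
    """Focal loss: binary cross-entropy scaled by (1 - p_t)^gamma,
    which suppresses the loss of well-classified (easy) examples.
    p: predicted probability of the positive class; target: 0 or 1."""
    p_t = p if target == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(max(p_t, 1e-12))

def incremental_gamma(step, total_steps, gamma_max=2.0):
    """Illustrative 'incremental' schedule (an assumption, not the paper's):
    gamma ramps from 0 (plain cross-entropy) to gamma_max, so training
    gradually shifts its focus toward hard examples."""
    return gamma_max * min(step, total_steps) / total_steps

# An easy example (p_t = 0.9) vs. a hard example (p_t = 0.3), true label 1.
easy_loss_early = focal_bce(0.9, 1, incremental_gamma(0, 100))    # plain BCE
easy_loss_late  = focal_bce(0.9, 1, incremental_gamma(100, 100))  # scaled by 0.01
hard_loss_late  = focal_bce(0.3, 1, incremental_gamma(100, 100))  # scaled by 0.49
```

Late in training the easy example's loss is multiplied by (1 - 0.9)² = 0.01 while the hard example's is multiplied by only (1 - 0.3)² = 0.49, so hard examples dominate the gradient, which is the down-weighting effect the abstract describes.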

Keywords: Generative adversarial networks, Image generation, Image-to-image translation, Face photo-sketch synthesis, Image super-resolution reconstruction

Article history: Received 26 July 2019, Revised 3 December 2019, Accepted 26 December 2019, Available online 22 January 2020, Version of Record 22 January 2020.

DOI: https://doi.org/10.1016/j.ipm.2019.102192