Progressive editing with stacked Generative Adversarial Network for multiple facial attribute editing

Authors:

Highlights:

Abstract

Generative Adversarial Network (GAN) based facial attribute editing has been successfully applied to many real-world applications. However, most existing methods suffer from semantic entanglement and imprecise editing when handling multiple facial attributes. The situation worsens when samples with minority attribute values are scarce, because the majority attribute values easily dominate learning. A stacked conditional GAN (cGAN) is proposed in this study to address these problems. Multiple-attribute editing is decomposed into several single-attribute editing tasks, each learned individually by a base cGAN. Moreover, samples with minority attribute values receive greater attention during learning. The proposed method not only reduces the difficulty of multiple-attribute editing but also mitigates the imbalance problem. Residual image learning is applied in our model to reduce the difficulty of image generation. The superiority of our model is demonstrated experimentally against popular GAN-based facial attribute editing methods in terms of image quality, editing accuracy, and training cost. The results confirm that the proposed model outperforms the other methods, especially in imbalanced situations.
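The abstract describes two architectural ideas: decomposing multiple-attribute editing into a stack of single-attribute cGAN editors, and residual image learning, where each generator predicts only the change to apply to the input face. The PyTorch sketch below is a minimal illustration of these two ideas only; the class names, layer choices, and scalar attribute conditioning are illustrative assumptions and not the authors' actual architecture, which also involves discriminators and the reweighting of minority attribute values.

```python
import torch
import torch.nn as nn


class ResidualAttributeGenerator(nn.Module):
    """Toy single-attribute editor: predicts a residual image that is added
    to the input face, so the network only has to model the change."""

    def __init__(self, channels=3):
        super().__init__()
        # Input image concatenated with a 1-channel map of the target attribute value.
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, attr_value):
        # Broadcast the scalar target attribute value to a spatial conditioning map.
        cond = attr_value.view(-1, 1, 1, 1).expand(-1, 1, x.size(2), x.size(3))
        residual = self.net(torch.cat([x, cond], dim=1))
        # Edited image = input + predicted residual, kept in the [-1, 1] image range.
        return torch.clamp(x + residual, -1.0, 1.0)


class StackedEditor(nn.Module):
    """Chains independently trained single-attribute editors so that
    multiple-attribute editing is performed one attribute at a time."""

    def __init__(self, generators):
        super().__init__()
        self.generators = nn.ModuleList(generators)

    def forward(self, x, attr_values):
        # attr_values: (batch, num_attributes) target values, one per stacked editor.
        for i, gen in enumerate(self.generators):
            x = gen(x, attr_values[:, i])
        return x


# Usage sketch: edit two hypothetical attributes on a batch of 128x128 faces.
editors = [ResidualAttributeGenerator(), ResidualAttributeGenerator()]
model = StackedEditor(editors)
faces = torch.randn(4, 3, 128, 128)  # placeholder batch in [-1, 1]
targets = torch.tensor([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 0.0]])
edited = model(faces, targets)
print(edited.shape)  # torch.Size([4, 3, 128, 128])
```

In this sketch, stacking means that each generator sees the output of the previous one, so each base editor only needs to learn a single attribute, which mirrors the decomposition described in the abstract.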

Keywords:

Article history: Received 8 June 2021, Revised 12 December 2021, Accepted 27 December 2021, Available online 31 December 2021, Version of Record 4 February 2022.

DOI: https://doi.org/10.1016/j.cviu.2021.103347