SliderGAN: Synthesizing Expressive Face Images by Sliding 3D Blendshape Parameters

Authors: Evangelos Ververas, Stefanos Zafeiriou

Abstract

Image-to-image (i2i) translation is the dense regression problem of learning how to transform an input image into an output using aligned image pairs. Remarkable progress has been made in i2i translation with the advent of deep convolutional neural networks, particularly using the learning paradigm of generative adversarial networks (GANs). In the absence of paired images, i2i translation is tackled with one or multiple domain transformations (e.g., CycleGAN, StarGAN). In this paper, we study the problem of image-to-image translation under a set of continuous parameters that correspond to a model describing a physical process. In particular, we propose SliderGAN, which transforms an input face image into a new one according to the continuous values of a statistical blendshape model of facial motion. We show that it is possible to edit a facial image according to expression and speech blendshapes, using sliders that control the continuous values of the blendshape model. This provides much more flexibility in various tasks, including but not limited to face editing, expression transfer and face neutralisation, compared to models based on discrete expressions or action units.
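The abstract describes conditioning a generator on a continuous blendshape parameter vector rather than on discrete domain labels. A common way to implement such conditioning (used, e.g., in StarGAN for discrete labels) is to broadcast the parameter vector into constant per-pixel feature maps and concatenate them with the input image channels. The sketch below illustrates that conditioning step only; the helper name and shapes are hypothetical and not taken from the paper.

```python
import numpy as np

def condition_on_sliders(image, sliders):
    """Tile a continuous slider vector into per-pixel channel maps and
    concatenate with the image (hypothetical helper for illustration).

    image:   (C, H, W) array
    sliders: (K,) array of continuous blendshape values
    returns: (C + K, H, W) array to be fed to a generator network
    """
    c, h, w = image.shape
    # Each slider value becomes a constant H x W feature map.
    maps = np.broadcast_to(sliders[:, None, None], (sliders.shape[0], h, w))
    return np.concatenate([image, maps], axis=0)

img = np.random.rand(3, 64, 64).astype(np.float32)
p = np.array([0.2, -0.5, 0.8], dtype=np.float32)  # example slider values
x = condition_on_sliders(img, p)
print(x.shape)  # (6, 64, 64)
```

Because the sliders vary continuously, interpolating the vector `p` between two settings yields a smooth path in the conditioning input, which is what makes slider-style editing possible, in contrast to one-hot domain labels.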

Keywords: GAN, Image translation, Facial expression synthesis, Speech synthesis, Blendshape models, Action units, 3DMM fitting, Relativistic discriminator, Emotionet, 4DFAB, LRW

Paper URL: https://doi.org/10.1007/s11263-020-01338-7