CS-GAN: Cross-Structure Generative Adversarial Networks for Chinese calligraphy translation

Authors:

Highlights:

Abstract:

Generative Adversarial Networks (GANs) have made great progress in cross-domain image translation. In practice, image-to-image translation tasks often involve structural differences between the two domains, as in translation on unpaired Chinese calligraphy datasets. However, existing models only convert color and texture features while leaving structures unchanged (e.g., in apple-to-orange translation they recolor the apples but preserve their shape). To address cross-structure image translation, such as the cross-structure translation of Chinese calligraphy, this paper proposes a novel GAN model named CS-GAN. In CS-GAN, a distribution transform, the reparameterization trick, and feature sampling are used to convert feature maps from domain S to domain T; images in domain T are then generated through feature concatenation. The proposed CS-GAN is evaluated on three structurally different sets of Chinese calligraphy by three famous calligraphers: Yan Zhenqing, Zhao Mengfu, and Ouyang Xun. Extensive experimental results show that CS-GAN successfully translates Chinese calligraphy across different structures and outperforms state-of-the-art models.
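To make the abstract's description of the feature-level transform concrete, the sketch below illustrates (in PyTorch) how a source-domain feature map can be mapped to Gaussian parameters, sampled with the reparameterization trick, and concatenated before decoding. This is a minimal illustration only, not the authors' implementation; the module name, channel sizes, and layer choices are assumptions.

```python
# Minimal sketch (not the authors' code) of the reparameterization step the
# abstract describes: features from domain S are mapped to a distribution,
# a latent code is sampled differentiably, and the result is concatenated
# with the original features before a decoder (not shown) produces domain T.
# All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class CrossStructureTransform(nn.Module):
    def __init__(self, in_channels: int = 256, latent_channels: int = 64):
        super().__init__()
        # Predict per-location Gaussian parameters from the S-domain feature map.
        self.to_mu = nn.Conv2d(in_channels, latent_channels, kernel_size=1)
        self.to_logvar = nn.Conv2d(in_channels, latent_channels, kernel_size=1)

    def forward(self, feat_s: torch.Tensor) -> torch.Tensor:
        mu = self.to_mu(feat_s)
        logvar = self.to_logvar(feat_s)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        # Feature concatenation, as described in the abstract.
        return torch.cat([feat_s, z], dim=1)


if __name__ == "__main__":
    feat_s = torch.randn(1, 256, 16, 16)      # hypothetical S-domain feature map
    fused = CrossStructureTransform()(feat_s)
    print(fused.shape)                        # torch.Size([1, 320, 16, 16])
```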

Keywords: Chinese calligraphy, Style transfer, Generative Adversarial Network

Article history: Received 7 March 2021, Revised 20 June 2021, Accepted 21 July 2021, Available online 27 July 2021, Version of Record 30 July 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107334