PCAE: A framework of plug-in conditional auto-encoder for controllable text generation


Abstract:

Controllable text generation has advanced rapidly in recent years. Yet existing methods are either constrained to a one-off training pattern or too inefficient to accept multiple conditions at each generation stage. We propose a model-agnostic framework, Plug-in Conditional Auto-Encoder for Controllable Text Generation (PCAE), for flexible and semi-supervised text generation. Our framework is "plug-and-play": only a portion of the pre-trained model's parameters (less than half) needs to be fine-tuned. Crucial to the success of PCAE is the proposed broadcasting label fusion network, which navigates the global latent code into a specified local, confined space. Visualization of the local latent priors confirms this intended behavior in the hidden space of the proposed model. Moreover, extensive experiments across five related generation tasks (from 2 conditions up to 10 conditions), on both RNN-based and pre-trained BART [26] based auto-encoders, reveal the high capability of PCAE, which enables generation that is highly controllable, syntactically diverse, and time-saving with a minimum of labeled samples. We will release our code at https://github.com/ImKeTT/pcae.
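To make the core idea concrete, the sketch below illustrates one plausible reading of "label fusion navigating a global latent code into a local, confined space": a learned per-label embedding is broadcast over a batch of latent codes and fused through a bounded non-linearity. All names, shapes, and the specific fusion rule here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_labels = 8, 3

# Hypothetical learned parameters (random stand-ins for illustration):
# one embedding per condition label, plus a linear map on the latent code.
label_emb = rng.normal(size=(n_labels, latent_dim))
W = rng.normal(size=(latent_dim, latent_dim)) * 0.1

def broadcast_fuse(z, label_id):
    """Fuse a label embedding into global latent codes z.

    The label embedding is broadcast across the batch dimension, and
    tanh confines the fused code to the bounded region (-1, 1)^d --
    a toy analogue of mapping into a "local and confined" latent space.
    """
    e = label_emb[label_id]            # shape (latent_dim,), broadcasts over batch
    return np.tanh(z @ W + e)          # shape (batch, latent_dim), values in (-1, 1)

z_global = rng.normal(size=(4, latent_dim))   # a batch of 4 global latent codes
z_local = broadcast_fuse(z_global, label_id=1)
```

In a real plug-in setup, only parameters like `label_emb` and `W` would be trained while the pre-trained encoder and decoder stay frozen, which is consistent with the "less than half of parameters fine-tuned" claim above.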

Keywords: Controllable text generation, Plug-and-play, Model-agnostic, Transformers

Article history: Received 17 March 2022, Revised 19 August 2022, Accepted 21 August 2022, Available online 6 September 2022, Version of Record 16 September 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.109766