Distributional discrepancy: A metric for unconditional text generation

Abstract:

The purpose of unconditional text generation is to train a model on real sentences and then generate novel sentences with the same quality and diversity as the training data. However, when different metrics are used to compare methods of unconditional text generation, contradictory conclusions are drawn. The difficulty is that both the diversity and the quality of the samples must be considered simultaneously when the models are evaluated. To solve this problem, a novel metric, distributional discrepancy (DD), is designed to evaluate generators based on the discrepancy between the generated sentences and the real training sentences. However, the DD cannot be computed directly because the distribution of real sentences is unavailable. Thus, we propose a method for estimating the DD by training a neural-network-based text classifier. For comparison, three existing metrics, bilingual evaluation understudy (BLEU) versus self-BLEU, language model score versus reverse language model score, and Fréchet embedding distance, along with the proposed DD, are used to evaluate two popular generative models, long short-term memory and generative pretrained transformer 2, on both synthetic and real data. Experimental results show that DD is significantly better than the three existing metrics at ranking these generative models.
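The core idea of estimating the discrepancy from a trained classifier can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes a binary classifier that outputs P(real | sentence) on held-out real and generated samples, and uses the common accuracy-based estimate DD ≈ 2 × accuracy − 1 (chance-level accuracy 0.5 gives DD = 0, i.e., the distributions are indistinguishable; a perfect classifier gives DD = 1).

```python
# Hedged sketch: estimating a distributional discrepancy (DD) from a binary
# classifier's held-out accuracy. The classifier itself (a neural network in
# the paper) is assumed; here we only take its output scores as input.

def estimate_dd(scores_real, scores_fake, threshold=0.5):
    """Estimate DD from classifier outputs P(real | x) on held-out samples.

    scores_real: scores on real sentences (higher = judged real)
    scores_fake: scores on generated sentences
    """
    correct = sum(s >= threshold for s in scores_real)
    correct += sum(s < threshold for s in scores_fake)
    accuracy = correct / (len(scores_real) + len(scores_fake))
    # Accuracy 0.5 (chance) -> DD = 0; accuracy 1.0 -> DD = 1.
    return max(0.0, 2.0 * accuracy - 1.0)

# Toy usage with hypothetical held-out scores:
real = [0.9, 0.8, 0.4, 0.7]   # classifier scores on real sentences
fake = [0.2, 0.6, 0.1, 0.3]   # classifier scores on generated sentences
print(estimate_dd(real, fake))  # 6 of 8 correct -> accuracy 0.75 -> DD 0.5
```

A better generator makes the classifier's task harder, driving accuracy toward 0.5 and the estimated DD toward 0, which is why a single number can reflect both quality and diversity.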

Keywords: Unconditional text generation, Evaluation metric, Text classifier

Article history: Received 27 June 2020, Revised 30 January 2021, Accepted 2 February 2021, Available online 6 February 2021, Version of Record 15 February 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.106850