Monotonic alignments for summarization

Authors:

Highlights:

Abstract

Summarization is the task of creating a summary that captures the major points of the original document. Deep learning plays an important role in both abstractive and extractive summarization. While a number of models show that combining the two approaches gives good results, this paper focuses on a purely abstractive method for generating good summaries. Our model is a stacked RNN with a monotonic alignment mechanism. Monotonic alignment has the advantage of producing context in the same order as the original document while at the same time eliminating repeated sequences. To obtain monotonic alignment, this paper proposes two energies that are computed using only the previous alignment state. We use a sub-word method to reduce the rate of out-of-vocabulary (OOV) tokens, dropout for generalization, and residual connections to overcome vanishing gradients. We experiment on the CNN/Daily Mail and Reddit datasets. Our method outperforms previous models with monotonic alignment by 4 ROUGE-1 points and achieves results comparable to the state of the art.
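The abstract does not give the paper's two energy functions, but the core monotonicity constraint it describes, that each decoding step may only align to the current or a later encoder position, can be sketched with a generic hard monotonic attention rule. The sigmoid-threshold "select" test below is an illustrative stand-in, not the paper's actual energy definition:

```python
import numpy as np

def monotonic_align(energies, prev_pos, threshold=0.5):
    """Choose the next alignment position, scanning forward from prev_pos.

    energies : 1-D array of alignment energies over encoder positions.
    prev_pos : alignment position chosen at the previous decoder step.

    Scanning only forward from prev_pos enforces monotonicity: the
    alignment can stay in place or advance, never move backward, which
    keeps the generated context in source order and prevents the decoder
    from revisiting (and repeating) earlier source segments.
    """
    for j in range(prev_pos, len(energies)):
        p_select = 1.0 / (1.0 + np.exp(-energies[j]))  # sigmoid of energy
        if p_select > threshold:
            return j
    return len(energies) - 1  # nothing selected: fall back to last position

# Successive calls can only hold or advance the alignment position.
e = np.array([-2.0, 1.0, -1.0, 2.0])
pos = monotonic_align(e, 0)          # selects position 1
pos = monotonic_align(e, pos + 1)    # next step scans from 2 onward
```

In the paper's setting this scan would be driven by its two proposed energies, each a function of only the previous alignment state, rather than a precomputed energy vector.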

Keywords: Summarization, Monotonic, Alignment, Attention

Article history: Received 10 January 2019, Revised 1 October 2019, Accepted 8 December 2019, Available online 14 December 2019, Version of Record 24 February 2020.

DOI: https://doi.org/10.1016/j.knosys.2019.105363