
EMNLP 2019 paper list

Proceedings of the 3rd Workshop on Neural Generation and Translation @ EMNLP-IJCNLP 2019, Hong Kong, November 4, 2019.

Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT 2019 Efficiency Task.
Selecting, Planning, and Rewriting: A Modular Approach for Data-to-Document Generation and Translation.
From Research to Production and Back: Ludicrously Fast Neural Machine Translation.
Naver Labs Europe's Systems for the Document-Level Generation and Translation Task at WNGT 2019.
University of Edinburgh's submission to the Document-level Generation and Translation Shared Task.
SYSTRAN @ WNGT 2019: DGT Task.
Monash University's Submissions to the WNGT 2019 Document Translation Task.
Transformer and seq2seq model for Paraphrase Generation.
Learning to Generate Word- and Phrase-Embeddings for Efficient Phrase-Based Neural Machine Translation.
Auto-Sizing the Transformer Network: Improving Speed, Efficiency, and Performance for Low-Resource Machine Translation.
Interrogating the Explanatory Power of Attention in Neural Machine Translation.
Paraphrasing with Large Language Models.
Mixed Multi-Head Self-Attention for Neural Machine Translation.
A Margin-based Loss with Synthetic Negative Samples for Continuous-output Machine Translation.
Big Bidirectional Insertion Representations for Documents.
On the Importance of Word Boundaries in Character-level Neural Machine Translation.
Adaptively Scheduled Multitask Learning: The Case of Low-Resource Neural Machine Translation.
Machine Translation of Restaurant Reviews: New Corpus for Domain Adaptation and Robustness.
Generalization in Generation: A closer look at Exposure Bias.
Enhanced Transformer Model for Data-to-Text Generation.
Unsupervised Evaluation Metrics and Learning Criteria for Non-Parallel Textual Transfer.
Decomposing Textual Information For Style Transfer.
On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation.
On the use of BERT for Neural Machine Translation.
Zero-Resource Neural Machine Translation with Monolingual Pivot Data.
Controlled Text Generation for Data Augmentation in Intelligent Artificial Agents.
Making Asynchronous Stochastic Gradient Descent Work for Transformers.
Transformer-based Model for Single Documents Neural Summarization.
Domain Differential Adaptation for Neural Machine Translation.
Generating Diverse Story Continuations with Controllable Semantics.
Generating a Common Question from Multiple Documents using Multi-source Encoder-Decoder Models.
Recycling a Pre-trained BERT Encoder for Neural Machine Translation.
Hello, It's GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems.
Findings of the Third Workshop on Neural Generation and Translation.