
Improving Conditioning in Context-Aware Sequence to Sequence Models

About

Neural sequence-to-sequence models are well established for applications that can be cast as mapping a single input sequence into a single output sequence. In this work, we focus on cases where generation is conditioned on both a short query and a long context, such as abstractive question answering or document-level translation. We modify the standard sequence-to-sequence approach to make better use of both the query and the context by expanding the conditioning mechanism to intertwine query and context attention. We also introduce a simple and efficient data augmentation method for the proposed model. Experiments on three different tasks show that both changes lead to consistent improvements.
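The core idea — conditioning each decoding step on both a short query and a long context — can be illustrated with a minimal sketch. This is a hypothetical NumPy mock-up, not the paper's architecture: the shapes, the `attend` helper, and the gated mixing of the two attention summaries are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: one decoder step attends over BOTH a short query
# and a long context, and a gate mixes the two summaries.  All names,
# shapes, and the gating scheme are illustrative, not the paper's.

rng = np.random.default_rng(0)
d = 16                                        # hidden size (assumed)
query_enc = rng.standard_normal((5, d))       # 5 encoded query tokens
context_enc = rng.standard_normal((200, d))   # 200 encoded context tokens
dec_state = rng.standard_normal(d)            # current decoder state

def attend(state, memory):
    """Scaled dot-product attention: weighted summary of `memory`."""
    scores = memory @ state / np.sqrt(memory.shape[1])
    weights = np.exp(scores - scores.max())   # stable softmax
    weights /= weights.sum()
    return weights @ memory

q_summary = attend(dec_state, query_enc)      # what the query asks for
c_summary = attend(dec_state, context_enc)    # supporting evidence

# A sigmoid gate (a random projection stands in for a learned parameter)
# decides how strongly each source conditions this decoding step.
w_g = rng.standard_normal(2 * d)
g = 1.0 / (1.0 + np.exp(-w_g @ np.concatenate([q_summary, c_summary])))
conditioned = g * q_summary + (1.0 - g) * c_summary

assert conditioned.shape == (d,)
```

In a trained model both attention modules and the gate would be learned jointly, so the decoder can shift between following the query and drawing on the context as generation proceeds.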

Xinyi Wang, Jason Weston, Michael Auli, Yacine Jernite • 2019

Related benchmarks

Task                                 Dataset                    Result           Rank
Long-form Question Answering         ELI5                       ROUGE-L 14.63    27
Document-Level Machine Translation   IWSLT Fr-En 2010 (test)    BLEU 37.3        15
Knowledge-Grounded Dialogue          Wizard of Wikipedia        F1 Score 35.69   6
