
Sequence-to-Sequence Learning as Beam-Search Optimization

About

Sequence-to-Sequence (seq2seq) modeling has rapidly become an important general-purpose NLP tool that has proven effective for many text-generation and sequence-labeling tasks. Seq2seq builds on deep neural language modeling and inherits its remarkable accuracy in estimating local, next-word distributions. In this work, we introduce a model and beam-search training scheme, based on the work of Daumé III and Marcu (2005), that extends seq2seq to learn global sequence scores. This structured approach avoids classical biases associated with local training and unifies the training loss with the test-time usage, while preserving the proven model architecture of seq2seq and its efficient training approach. We show that our system outperforms a highly-optimized attention-based seq2seq system and other baselines on three different sequence-to-sequence tasks: word ordering, parsing, and machine translation.
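For readers unfamiliar with the beam-search procedure the abstract refers to, the following is a minimal, self-contained sketch of beam-search decoding. It is not the paper's training method (which optimizes global sequence scores during training); it only illustrates the standard test-time search that the work builds on. The `toy_score` function and all names here are illustrative assumptions, not from the paper.

```python
import math

def beam_search(score_fn, vocab, beam_width=3, max_len=4, eos="</s>"):
    """Standard beam search: keep the beam_width highest-scoring partial
    sequences, extending each unfinished one with every vocabulary token."""
    beams = [([], 0.0)]  # (token list, cumulative log-score)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == eos:
                candidates.append((seq, score))  # finished hypothesis: carry over
                continue
            for tok in vocab:
                candidates.append((seq + [tok], score + score_fn(seq, tok)))
        # prune to the beam_width best hypotheses
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0]

# Toy local scorer (an assumption for illustration): strongly prefers the
# token sequence "a b </s>" position by position.
def toy_score(prefix, tok):
    target = ["a", "b", "</s>"]
    i = len(prefix)
    return math.log(0.8) if i < len(target) and target[i] == tok else math.log(0.1)

best_seq, best_score = beam_search(toy_score, ["a", "b", "</s>"])
print(best_seq)  # → ['a', 'b', '</s>']
```

The "classical biases" mentioned above arise because such a decoder searches over full sequences at test time, while conventional seq2seq training only supervises the local next-word distribution; the paper's contribution is a training loss defined over the beam itself.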

Sam Wiseman, Alexander M. Rush · 2016

Related benchmarks

Task | Dataset | Metric | Result | Rank
Machine Translation | IWSLT De-En 2014 (test) | BLEU | 25.48 | 146
Machine Translation | IWSLT German-to-English '14 (test) | BLEU | 26.36 | 110
Dependency Parsing | English PTB Stanford Dependencies (test) | UAS | 91.57 | 76
Machine Translation | IWSLT 2014 | BLEU | 25.48 | 20
Word Ordering | Penn Treebank (test) | BLEU | 34.5 | 11

Other info

Code
