
Sequence Level Training with Recurrent Neural Networks

About

Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.
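The abstract describes the approach only at a high level. As a rough illustration of the underlying idea, below is a minimal sketch of a REINFORCE-style sequence-level loss, in which the model samples its own output (rather than being fed the ground-truth prefix, as in next-word training) and the sampled sequence's log-likelihood is weighted by the test-time metric. The `model` interface (`init_state`, `step`, `bos_token`, `eos_token`) and the `bleu` scorer are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
import torch

def sequence_level_loss(model, src, ref, max_len=50):
    """Sample a sequence from the model and weight its log-likelihood
    by a (non-differentiable) test-time metric such as sentence BLEU."""
    tokens, log_probs = [], []
    state = model.init_state(src)       # encode the source/context
    inp = model.bos_token
    for _ in range(max_len):
        logits, state = model.step(inp, state)
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()             # sample, instead of teacher forcing
        log_probs.append(dist.log_prob(tok))
        tokens.append(tok.item())
        if tok.item() == model.eos_token:
            break
        inp = tok
    reward = bleu(tokens, ref)          # hypothetical scorer: metric used at test time
    # REINFORCE: increase the log-probability of high-reward sequences.
    return -reward * torch.stack(log_probs).sum()
```

Because generation here mirrors the test-time setting, errors made early in the sampled sequence affect the reward, which is exactly the discrepancy that per-word training ignores.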

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba • 2015

Related benchmarks

Task                | Dataset                            | Metric | Result | Rank
Machine Translation | WMT En-Fr 2014 (test)              | BLEU   | 30.71  | 237
Machine Translation | IWSLT De-En 2014 (test)            | BLEU   | 20.73  | 146
Machine Translation | IWSLT German-to-English '14 (test) | BLEU   | 21.83  | 110

Other info

Code
