
Improving Neural Machine Translation Models with Monolingual Data

About

Neural Machine Translation (NMT) has obtained state-of-the-art performance for several language pairs, while only using parallel data for training. Target-side monolingual data plays an important role in boosting fluency for phrase-based statistical machine translation, and we investigate the use of monolingual data for NMT. In contrast to previous work, which combines NMT models with separately trained language models, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to train with monolingual data without changing the neural network architecture. By pairing monolingual training data with an automatic back-translation, we can treat it as additional parallel training data, and we obtain substantial improvements on the WMT 15 task English<->German (+2.8-3.7 BLEU) and the low-resource IWSLT 14 task Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 task English->German.
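The core idea of back-translation can be sketched in a few lines: target-side monolingual sentences are translated back into the source language by a reverse (target-to-source) model, and the resulting synthetic pairs are mixed with the genuine parallel data. The sketch below is illustrative only; `reverse_translate` is a hypothetical stand-in for a trained target-to-source NMT model, here mocked with a word-level dictionary.

```python
def back_translate(monolingual_targets, reverse_translate):
    """Pair each target-side monolingual sentence with a synthetic source
    sentence produced by a target-to-source model, yielding (source, target)
    pairs usable as additional parallel training data."""
    return [(reverse_translate(tgt), tgt) for tgt in monolingual_targets]


def build_training_data(parallel_pairs, monolingual_targets, reverse_translate):
    # Mix genuine parallel data with synthetic back-translated pairs;
    # training then proceeds on the union, with no architecture changes.
    return parallel_pairs + back_translate(monolingual_targets, reverse_translate)


if __name__ == "__main__":
    # Toy reverse model: word-by-word German->English dictionary lookup
    # (a real system would use a trained NMT model in the reverse direction).
    mock_dict = {"hallo": "hello", "welt": "world"}
    reverse = lambda s: " ".join(mock_dict.get(w, w) for w in s.split())

    parallel = [("hello world", "hallo welt")]       # genuine (en, de) pair
    mono = ["hallo welt"]                            # target-side monolingual data

    data = build_training_data(parallel, mono, reverse)
    print(data)
```

In the paper's setup the synthetic source side may be noisy, but because the target side is genuine text, the decoder still benefits from it as language-model-like training signal.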

Rico Sennrich, Barry Haddow, Alexandra Birch • 2015

Related benchmarks

Task                                      | Dataset                  | Metric   | Result | Rank
------------------------------------------|--------------------------|----------|--------|-----
Machine Translation                       | WMT En-De 2014 (test)    | BLEU     | 27.82  | 379
Text Classification                       | AG-News                  | Accuracy | 89.6   | 248
Text Classification                       | TREC                     | Accuracy | 96     | 179
Machine Translation (Chinese-to-English)  | NIST 2003 (MT-03)        | BLEU     | 47.1   | 52
Topic Classification                      | Yahoo                    | Accuracy | 68.7   | 42
Machine Translation (Chinese-to-English)  | NIST 2005 (MT-05)        | BLEU     | 45.69  | 42
Machine Translation                       | WMT newstest 2015 (test) | BLEU     | 31.6   | 31
Machine Translation                       | WMT14 DE-EN (test)       | BLEU     | 31.91  | 28
Sentiment Classification                  | Laptop14                 | Accuracy | 84.17  | 28
Machine Translation                       | NIST 2004 (MT-04) (test) | BLEU     | 0.4781 | 27

(Showing 10 of 46 rows)
