Understanding Back-Translation at Scale
About
An effective method to improve neural machine translation with monolingual data is to augment the parallel training corpus with back-translations of target language sentences. This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences. We find that in all but resource-poor settings, back-translations obtained via sampling or noised beam outputs are most effective. Our analysis shows that sampling or noisy synthetic data gives a much stronger training signal than data generated by beam or greedy search. We also study how synthetic data compares to genuine bitext and examine various domain effects. Finally, we scale to hundreds of millions of monolingual sentences and achieve a new state of the art of 35 BLEU on the WMT'14 English-German test set.
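The "noised beam" variant perturbs the back-translated source sentences before they are paired with the genuine targets for training. Below is a minimal Python sketch of that noise model, assuming the three operations and rates reported in the paper (delete each word with probability 0.1, replace each word with a filler token with probability 0.1, and shuffle words no more than three positions apart); the function name and the `<BLANK>` filler token are illustrative choices, not the authors' implementation.

```python
import random

def noise_source(tokens, p_delete=0.1, p_blank=0.1, max_swap=3):
    """Perturb a back-translated source sentence with the three noise
    operations described in the paper: word deletion, word blanking,
    and local word-order shuffling."""
    # 1. Delete each word with probability p_delete.
    tokens = [t for t in tokens if random.random() >= p_delete]
    # 2. Replace each surviving word with a filler token with probability p_blank.
    tokens = ["<BLANK>" if random.random() < p_blank else t for t in tokens]
    # 3. Shuffle locally: offset each position by uniform noise in [0, max_swap]
    #    and re-sort; this can only swap words at most max_swap positions apart.
    keys = [i + random.uniform(0, max_swap) for i in range(len(tokens))]
    return [t for _, t in sorted(zip(keys, tokens), key=lambda p: p[0])]

# Example: noise a synthetic source produced by a reverse (target-to-source) model.
synthetic = "the cat sat on the mat near the door".split()
print(noise_source(synthetic))
```

In the paper, this noise is applied only to synthetic sources generated by beam search; sampling from the reverse model is the alternative that injects comparable diversity without an explicit noising step.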
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Translation | WMT 2014 (test) | BLEU | 45.6 | 100 |
| Machine Translation | WMT16 English-German (test) | BLEU | 41.2 | 58 |
| Machine Translation (Chinese-to-English) | NIST 2003 (MT-03) | BLEU | 46.93 | 52 |
| Machine Translation (Chinese-to-English) | NIST 2005 (MT-05) | BLEU | 46.81 | 42 |
| Machine Translation | WMT English-French 2014 (test) | BLEU | 45.6 | 41 |
| Machine Translation | WMT16 German-English (test) | BLEU | 40.2 | 39 |
| Machine Translation | WMT14 English-French (newstest2014) | BLEU | 45.6 | 39 |
| Machine Translation | NIST 2004 (MT-04, test) | BLEU | 47.8 | 27 |
| Machine Translation | NIST 2006 (MT-06, test) | BLEU | 46.2 | 27 |
| Machine Translation | WMT Original 2014-2018 (test) | BLEU | 36.6 | 26 |