
Scaling Neural Machine Translation

About

Sequence-to-sequence learning models still require several days to reach state-of-the-art performance on large benchmark datasets using a single machine. This paper shows that reduced precision and large-batch training can speed up training by nearly 5x on a single 8-GPU machine with careful tuning and implementation. On WMT'14 English-German translation, we match the accuracy of Vaswani et al. (2017) in under 5 hours when training on 8 GPUs and obtain a new state of the art of 29.3 BLEU after training for 85 minutes on 128 GPUs. We further improve these results to 29.8 BLEU by training on the much larger ParaCrawl dataset. On the WMT'14 English-French task, we obtain a state-of-the-art BLEU of 43.2 in 8.5 hours on 128 GPUs.
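The paper's actual implementation uses fairseq with FP16 arithmetic and synchronized multi-GPU training; the sketch below is not that code, but a minimal NumPy illustration of the gradient-accumulation idea behind large-batch training: gradients from several micro-batches are averaged before a single parameter update, which matches one step on the full batch while bounding per-step memory. The linear model, function names, and learning rate here are illustrative assumptions.

```python
import numpy as np

def grad(w, X, y):
    # Gradient of mean squared error loss mean((X @ w - y)**2) w.r.t. w
    # for an illustrative linear model (not the paper's Transformer).
    return 2.0 * X.T @ (X @ w - y) / len(y)

def accumulate_update(w, X, y, lr, micro_batch):
    # Split the large batch into equal micro-batches, average their
    # gradients, then apply one optimizer step: same effective batch
    # size as a single full-batch step, far less memory at a time.
    g = np.zeros_like(w)
    n_chunks = len(y) // micro_batch
    for i in range(n_chunks):
        s = slice(i * micro_batch, (i + 1) * micro_batch)
        g += grad(w, X[s], y[s])
    g /= n_chunks
    return w - lr * g

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
y = rng.normal(size=64)
w = np.zeros(4)

w_full = w - 0.1 * grad(w, X, y)               # one step on the full batch
w_accum = accumulate_update(w, X, y, 0.1, 16)  # same step via 4 micro-batches
print(np.allclose(w_full, w_accum))  # True
```

Because each micro-batch has the same size, the average of their mean-loss gradients equals the full-batch gradient exactly, so both paths produce the same update.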

Myle Ott, Sergey Edunov, David Grangier, Michael Auli • 2018

Related benchmarks

Task                                      Dataset                            Metric      Result   Rank
Image Classification                      ImageNet A                         Top-1 Acc   3.9      553
Image Classification                      ImageNet-R                         Top-1 Acc   38.8     474
Machine Translation                       WMT En-De 2014 (test)              BLEU        29.3     379
Machine Translation                       WMT En-Fr 2014 (test)              BLEU        43.2     237
Machine Translation                       WMT English-German 2014 (test)     BLEU        29.3     136
Machine Translation                       WMT 2014 (test)                    BLEU        29.3     100
Machine Translation                       WMT En-De '14                      BLEU        28.6     89
Machine Translation                       WMT14 En-De newstest2014 (test)    BLEU        29.3     65
Machine Translation                       WMT en-fr 14                       BLEU Score  43.2     56
Machine Translation (Chinese-to-English)  NIST 2003 (MT-03)                  BLEU        47.5     52

(Showing 10 of 35 rows)
