
Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade

About

Fully non-autoregressive neural machine translation (NAT) predicts all output tokens simultaneously with a single forward pass of the network, which significantly reduces inference latency at the cost of a quality drop relative to the Transformer baseline. In this work, we aim to close the performance gap while maintaining the latency advantage. We first inspect the fundamental issues of fully NAT models and adopt dependency reduction in the learning space of output tokens as the basic guidance. We then revisit methods from four different aspects that have proven effective for improving NAT models, and carefully combine these techniques with the necessary modifications. Extensive experiments on three translation benchmarks show that the proposed system achieves new state-of-the-art results for fully NAT models and obtains performance comparable to autoregressive and iterative NAT systems. For instance, one of the proposed models achieves 27.49 BLEU on WMT14 En-De with approximately a 16.5x speed-up at inference time.
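The latency contrast in the abstract comes from how many forward passes decoding requires. A minimal sketch of that difference is below; the toy scoring function and vocabulary are hypothetical stand-ins (a real system uses a Transformer encoder-decoder), so only the call-count structure reflects the paper's setting.

```python
# Toy contrast between autoregressive (AR) and fully non-autoregressive (NAT)
# decoding. VOCAB and toy_logits are hypothetical placeholders; the point is
# that AR needs one model call per output token, while fully NAT needs one
# call for the whole sentence.

VOCAB = ["wir", "lernen", "maschinelle", "uebersetzung", "<eos>"]

def toy_logits(src, position, prefix):
    # Hypothetical scorer: always prefers the token whose index matches the
    # target position, regardless of the prefix. A real model conditions on
    # the source (and, for AR, on the generated prefix).
    return [1.0 if i == position else 0.0 for i in range(len(VOCAB))]

def decode_autoregressive(src, max_len=5):
    # AR decoding: one forward call per target token, each conditioned on
    # the tokens generated so far.
    out, calls = [], 0
    for t in range(max_len):
        scores = toy_logits(src, t, out)
        calls += 1
        tok = VOCAB[max(range(len(scores)), key=scores.__getitem__)]
        out.append(tok)
        if tok == "<eos>":
            break
    return out, calls

def decode_nat(src, length=5):
    # Fully NAT decoding: a single (batched) forward call scores every
    # position at once; the target length must be chosen up front.
    all_scores = [toy_logits(src, t, None) for t in range(length)]  # one call
    out = [VOCAB[max(range(len(s)), key=s.__getitem__)] for s in all_scores]
    return out, 1

ar_out, ar_calls = decode_autoregressive("we learn machine translation")
nat_out, nat_calls = decode_nat("we learn machine translation")
print(ar_calls, nat_calls)  # AR: as many calls as tokens; NAT: one call
```

Because NAT drops the left-to-right conditioning, output tokens are predicted independently, which is exactly the dependency the paper's "dependency reduction" techniques target.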

Jiatao Gu, Xiang Kong • 2020

Related benchmarks

Task                 Dataset              Metric  Result  Rank
Machine Translation  WMT14 En-De (test)   BLEU    27.2    379
Machine Translation  WMT16 Ro-En (test)   BLEU    34.16   82
Machine Translation  WMT14 De-En (test)   BLEU    31.4    59
Machine Translation  WMT16 En-Ro (test)   BLEU    33.71   56
Machine Translation  WMT14 De-En (test)   BLEU    31.39   28
Machine Translation  WMT16 Ro-En (test)   BLEU    34.2    27
Machine Translation  WMT16 En-Ro (test)   BLEU    33.7    18
