Semi-Autoregressive Training Improves Mask-Predict Decoding
About
The recently proposed mask-predict decoding algorithm has narrowed the performance gap between semi-autoregressive machine translation models and the traditional left-to-right approach. We introduce a new training method for conditional masked language models, SMART, which mimics the semi-autoregressive behavior of mask-predict, producing training examples that contain model predictions as part of their inputs. Models trained with SMART produce higher-quality translations when using mask-predict decoding, effectively closing the remaining performance gap with fully autoregressive models.
Marjan Ghazvininejad, Omer Levy, Luke Zettlemoyer • 2020
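To make the training idea concrete, below is a minimal PyTorch sketch of a SMART-style training step as described in the abstract: the gold target is partially masked, the current model fills in the masks, and the model is then trained to recover the full gold target from this prediction-corrupted input. This is our own illustrative reading, not the authors' code; `ToyCMLM`, `smart_step`, `MASK_ID`, and `mask_ratio` are hypothetical names, and details such as taking the loss over all target positions are assumptions.

```python
import torch
import torch.nn.functional as F

MASK_ID = 0  # hypothetical [MASK] token id


class ToyCMLM(torch.nn.Module):
    """Stand-in for a real conditional masked language model (CMLM)."""

    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.src_emb = torch.nn.Embedding(vocab, dim)
        self.tgt_emb = torch.nn.Embedding(vocab, dim)
        self.out = torch.nn.Linear(dim, vocab)

    def forward(self, src, tgt):
        # Condition every target position on a pooled source representation.
        ctx = self.src_emb(src).mean(1, keepdim=True)
        return self.out(self.tgt_emb(tgt) + ctx)  # (batch, tgt_len, vocab)


def smart_step(model, src, tgt, mask_ratio=0.5):
    """One SMART-style step: corrupt the target with the model's own
    predictions, then train the CMLM to reproduce the gold target."""
    # 1) Mask a random subset of target positions (standard CMLM corruption).
    mask = torch.rand_like(tgt, dtype=torch.float) < mask_ratio
    masked_tgt = tgt.masked_fill(mask, MASK_ID)

    # 2) Let the current model fill in the masks without gradients,
    #    mimicking an intermediate state of mask-predict decoding.
    with torch.no_grad():
        preds = model(src, masked_tgt).argmax(-1)
    mixed = torch.where(mask, preds, tgt)  # model predictions replace masks

    # 3) Train on the mixed input; assuming the loss covers all target
    #    positions, the model learns both to keep correct tokens and to
    #    revise its own earlier mistakes.
    logits = model(src, mixed)
    return F.cross_entropy(logits.view(-1, logits.size(-1)), tgt.view(-1))


# Hypothetical usage with random toy data:
model = ToyCMLM()
src = torch.randint(1, 100, (8, 12))  # batch of source sentences
tgt = torch.randint(1, 100, (8, 12))  # gold target sentences
loss = smart_step(model, src, tgt)
loss.backward()
```

The key design point, per the abstract, is that training inputs contain model predictions rather than only gold tokens, so the model sees at training time the same kind of partially predicted inputs it must refine during mask-predict decoding.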
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Translation | WMT2014 En-De (test) | BLEU | 18.6 | 379 |
| Machine Translation | WMT2014 De-En (test) | BLEU | 23.8 | 59 |