Robust Neural Machine Translation with Doubly Adversarial Inputs
About
Neural machine translation (NMT) is often vulnerable to noisy perturbations in the input. We propose an approach to improving the robustness of NMT models that consists of two parts: (1) attacking the translation model with adversarial source examples; (2) defending the translation model with adversarial target inputs to improve its robustness against the adversarial source inputs. To generate adversarial inputs, we propose a gradient-based method that crafts adversarial examples informed by the translation loss over the clean inputs. Experimental results on Chinese-English and English-German translation tasks demonstrate that our approach achieves significant improvements ($2.8$ and $1.6$ BLEU points) over Transformer on standard clean benchmarks, while also exhibiting higher robustness on noisy data.
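The gradient-based attack can be sketched in miniature: score each candidate replacement word by how well its embedding shift aligns with the gradient of the translation loss, a first-order estimate of how much the swap would increase the loss. This is a minimal toy illustration, not the paper's full generation procedure; the embedding matrix, vocabulary, and the function `adversarial_substitute` are hypothetical names introduced here for clarity.

```python
import numpy as np

def adversarial_substitute(embeddings, word_idx, grad, exclude=()):
    """Pick the vocabulary word whose embedding shift best aligns with
    the translation-loss gradient, i.e. a first-order approximation of
    the loss increase caused by swapping the word at `word_idx`.

    embeddings: (V, d) embedding matrix for a vocabulary of V words
    word_idx:   index of the word to replace
    grad:       (d,) gradient of the translation loss w.r.t. that
                word's embedding, computed on the clean input
    exclude:    extra word indices to rule out as replacements
    """
    e_orig = embeddings[word_idx]
    # Score each candidate x by (e(x) - e(x_orig)) . grad: the
    # linearized change in loss if x replaces the original word.
    scores = (embeddings - e_orig) @ grad
    scores[word_idx] = -np.inf          # never "replace" with itself
    for i in exclude:
        scores[i] = -np.inf
    return int(np.argmax(scores))

# Toy vocabulary of 4 words with 3-dimensional embeddings.
embeddings = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0],
                       [0.5, 0.5, 0.5]])
grad = np.array([0.0, 0.0, 1.0])        # pretend loss gradient
print(adversarial_substitute(embeddings, 0, grad))  # word 2 aligns best
```

In the paper's setting the same idea is applied to both the source side (to attack) and the target side (to defend), with candidate sets restricted to semantically plausible neighbors rather than the whole vocabulary.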
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Machine Translation (Chinese-to-English) | NIST 2003 (MT-03) | BLEU 46.5 | 52 |
| Machine Translation (Chinese-to-English) | NIST 2005 (MT-05) | BLEU 46.58 | 42 |
| Machine Translation | NIST 2006 (MT-06, test) | BLEU 46.95 | 27 |
| Machine Translation | NIST 2004 (MT-04, test) | BLEU 0.4739 | 27 |
| English-German Machine Translation | WMT (newstest2014) | BLEU 28.34 | 19 |
| Machine Translation | NIST Chinese-English MT02 (test) | BLEU 47.06 | 14 |
| Machine Translation | NIST Chinese-English MT08 (test) | BLEU 37.38 | 11 |
| Machine Translation | NIST MT04 | BLEU 47.4 | 10 |
| Machine Translation | IWSLT 2016 En-Fr (test) | BLEU 39.46 | 9 |
| Machine Translation | IWSLT 2016 English-French (test2013) | BLEU 41.76 | 6 |