
Robust Neural Machine Translation with Doubly Adversarial Inputs

About

Neural machine translation (NMT) often suffers from vulnerability to noisy perturbations in the input. We propose an approach to improving the robustness of NMT models that consists of two parts: (1) attacking the translation model with adversarial source examples; (2) defending the translation model with adversarial target inputs to improve its robustness against the adversarial source inputs. To generate adversarial inputs, we propose a gradient-based method that crafts adversarial examples informed by the translation loss over the clean inputs. Experimental results on Chinese-English and English-German translation tasks demonstrate that our approach achieves significant improvements ($2.8$ and $1.6$ BLEU points) over Transformer on standard clean benchmarks, while also exhibiting higher robustness on noisy data.
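The gradient-based attack described above can be sketched as a word-substitution step: for a source word, pick the candidate whose embedding shift best aligns with the gradient of the translation loss, i.e. the loss-increasing direction. This is a minimal illustration, not the paper's implementation; the function name `adversarial_substitute`, the toy embeddings, and the candidate set are all hypothetical.

```python
import numpy as np

def adversarial_substitute(emb, grad, src_id, candidates):
    """Pick the candidate word whose embedding shift (e(x') - e(x))
    best aligns with the gradient of the translation loss."""
    diff = emb[candidates] - emb[src_id]   # embedding shift per candidate
    scores = diff @ grad                   # alignment with loss gradient
    return int(candidates[int(np.argmax(scores))])

# Toy example: 5-word vocabulary with 2-dim embeddings (hypothetical values).
emb = np.array([[0.0, 0.0],
                [1.0, 0.0],
                [0.0, 1.0],
                [-1.0, 0.0],
                [2.0, 0.0]])
grad = np.array([1.0, 0.0])                # pretend dLoss/dEmbedding at this position
candidates = np.array([1, 2, 3, 4])        # substitution candidates for word 0
best = adversarial_substitute(emb, grad, src_id=0, candidates=candidates)
# word 4 has the largest shift along the gradient direction
```

In the paper the candidate set is restricted (e.g. to semantically similar words) so the perturbed sentence stays plausible; an unconstrained argmax over the full vocabulary would produce nonsensical substitutions.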

Yong Cheng, Lu Jiang, Wolfgang Macherey • 2019

Related benchmarks

Task                                       | Dataset                                | Result      | Rank
-------------------------------------------|----------------------------------------|-------------|-----
Machine Translation (Chinese-to-English)   | NIST 2003 (MT-03)                      | BLEU 46.5   | 52
Machine Translation (Chinese-to-English)   | NIST 2005 (MT-05)                      | BLEU 46.58  | 42
Machine Translation                        | NIST 2006 (MT-06, test)                | BLEU 46.95  | 27
Machine Translation                        | NIST 2004 (MT-04, test)                | BLEU 47.39  | 27
English-German Machine Translation         | WMT newstest2014                       | BLEU 28.34  | 19
Machine Translation                        | NIST Chinese-English MT02 (test)       | BLEU 47.06  | 14
Machine Translation                        | NIST Chinese-English MT08 (test)       | BLEU 37.38  | 11
Machine Translation                        | NIST MT04                              | BLEU 47.4   | 10
Machine Translation                        | IWSLT 2016 En-Fr (test)                | BLEU 39.46  | 9
Machine Translation                        | IWSLT 2016 English-French (test2013)   | BLEU 41.76  | 6

Showing 10 of 14 rows.
