
TransFool: An Adversarial Attack against Neural Machine Translation Models

About

Deep neural networks have been shown to be vulnerable to small perturbations of their inputs, known as adversarial attacks. In this paper, we investigate the vulnerability of Neural Machine Translation (NMT) models to adversarial attacks and propose a new attack algorithm called TransFool. To fool NMT models, TransFool builds on a multi-term optimization problem and a gradient projection step. By integrating the embedding representation of a language model, we generate fluent adversarial examples in the source language that maintain a high level of semantic similarity with the clean samples. Experimental results demonstrate that, for different translation tasks and NMT architectures, our white-box attack can severely degrade the translation quality while the semantic similarity between the original and the adversarial sentences stays high. Moreover, we show that TransFool is transferable to unknown target models. Finally, automatic and human evaluations show that TransFool improves on existing attacks in terms of success rate, semantic similarity, and fluency, in both white-box and black-box settings. TransFool thus permits us to better characterize the vulnerability of NMT models and highlights the need for strong defense mechanisms and more robust NMT systems in real-life applications.
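The gradient projection step described above can be illustrated with a minimal toy sketch: a continuous token embedding is perturbed along a gradient direction and then projected back onto the nearest entry of a discrete vocabulary embedding table. Note that the loss, gradient, vocabulary, and all names below are hypothetical stand-ins, not the actual TransFool objective or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary: 10 tokens with 4-dimensional embeddings.
vocab_emb = rng.normal(size=(10, 4))

def project_to_vocab(emb, vocab):
    """Project each continuous embedding onto its nearest vocabulary embedding
    (Euclidean nearest neighbor); returns the projected vectors and token ids."""
    dists = np.linalg.norm(vocab[None, :, :] - emb[:, None, :], axis=-1)
    idx = dists.argmin(axis=1)
    return vocab[idx], idx

def attack_step(emb, grad, lr=0.5):
    # One gradient-ascent step on a (hypothetical) adversarial loss,
    # followed by projection back into the discrete embedding space.
    return project_to_vocab(emb + lr * grad, vocab_emb)

# Start from the embeddings of a toy 3-token sentence and perturb them.
start_idx = np.array([1, 2, 3])
emb = vocab_emb[start_idx]
grad = rng.normal(size=emb.shape)  # stands in for a real model gradient
adv_emb, adv_idx = attack_step(emb, grad)
```

In the actual attack, the gradient would come from backpropagating a translation-degradation loss (plus similarity and fluency terms) through the NMT model; the projection is what keeps each optimization iterate on a valid discrete token sequence.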

Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard • 2023

Related benchmarks

Task             | Dataset                            | Metric | Result | Rank
Text Translation | En-Zh                              | BLEU   | 0.81   | 14
Text Translation | En-Fr                              | BLEU   | 0.67   | 14
Text Translation | SST5 Baidu Translate en-fr (test)  | BLEU   | 0.51   | 3
Text Translation | SST5 Ali Translate en-zh (test)    | BLEU   | 59     | 3
Text Translation | Emotion Baidu Translate en-fr (test) | BLEU | 36     | 3
Text Translation | Emotion Ali Translate en-zh (test) | BLEU   | 49     | 3
