
Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment

About

Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective---it outperforms state-of-the-art attacks in terms of success rate and perturbation rate, (2) utility-preserving---it preserves semantic content and grammaticality, and remains correctly classified by humans, and (3) efficient---it generates adversarial text with computational complexity linear in the text length. The code, pre-trained target models, and test examples are available at https://github.com/jind11/TextFooler.
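The attack the abstract describes is, at its core, a greedy word-substitution search: rank words by how much they influence the model's prediction, then swap the most important ones for synonyms until the label flips. Below is a minimal, self-contained sketch of that loop. The keyword-counting classifier and the hand-written synonym table are toy stand-ins (assumptions for illustration); the actual TextFooler pipeline uses a trained target model, counter-fitted word embeddings for candidate synonyms, and part-of-speech and sentence-similarity checks that are omitted here.

```python
import math

def toy_classifier(words):
    """Return P(positive) for a token list -- a keyword-counting stand-in
    for a real sentiment model."""
    pos = {"great", "good", "wonderful", "excellent"}
    neg = {"bad", "terrible", "boring", "awful"}
    score = sum(w in pos for w in words) - sum(w in neg for w in words)
    return 1.0 / (1.0 + math.exp(-score))

# Toy synonym table (assumption; TextFooler draws candidates from
# counter-fitted word embeddings instead).
SYNONYMS = {
    "great": ["fine", "solid"],
    "wonderful": ["decent", "okay"],
    "boring": ["slow"],
}

def textfooler_attack(words, classifier):
    """Greedy TextFooler-style attack: rank words by importance,
    then substitute synonyms until the predicted label flips."""
    base = classifier(words)
    orig_pred = base > 0.5

    # Step 1: word importance = change in score when the word is deleted.
    def importance(i):
        return abs(base - classifier(words[:i] + words[i + 1:]))

    adv = list(words)
    # Step 2: visit words from most to least important.
    for i in sorted(range(len(words)), key=importance, reverse=True):
        best_word, best_prob = adv[i], classifier(adv)
        for cand in SYNONYMS.get(adv[i], []):
            trial = adv[:i] + [cand] + adv[i + 1:]
            p = classifier(trial)
            # Keep the candidate that pushes the score toward a flip.
            if (orig_pred and p < best_prob) or (not orig_pred and p > best_prob):
                best_word, best_prob = cand, p
        adv[i] = best_word
        if (classifier(adv) > 0.5) != orig_pred:
            return adv  # label flipped: attack succeeded
    return adv  # attack failed; return best perturbation found
```

Because each word is scored once and each substitution is evaluated against a fixed candidate list, the number of model queries grows linearly with the text length, matching the efficiency claim in the abstract.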

Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits • 2019

Related benchmarks

Task                            Dataset                  Result                      Rank
Natural Language Inference      SNLI (test)              Accuracy 86.7               681
Text-to-SQL                     Spider (dev)             --                          100
Text Classification             IMDB                     Clean Accuracy 96.8         32
Adversarial Attack              GLUE                     SST-2 Speedup 2.96          32
Natural Language Understanding  GLUE                     SST-2 Speedup 2.35          32
Text Classification             IMDB (test)              Attack Success Rate 88.7    27
Adversarial Evasion Attack      MGTBench Reuters         ASR 29                      24
Adversarial Evasion Attack      MGTBench Essay           ASR 38                      24
Adversarial Evasion Attack      MGTBench WP              ASR 59                      24
Adversarial Evasion Attack      MGT-Academic Humanity    ASR 11                      22

Showing 10 of 29 rows
