
Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution

About

Recent studies have shown that deep neural networks are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend neural NLP models against adversarial word-substitution attacks. However, there is a lack of systematic study comparing different defense approaches under the same attack setting. In this paper, we seek to fill this gap through a comprehensive study of the behavior of neural text classifiers trained with various defense methods under representative adversarial attacks. In addition, we propose an effective method to further improve the robustness of neural text classifiers against such attacks, achieving the highest accuracy on both clean and adversarial examples on the AGNEWS and IMDB datasets by a significant margin.
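To make the attack setting concrete, the sketch below shows a toy greedy word-substitution attack of the general kind benchmarked here: each word is swapped for a synonym if doing so lowers the classifier's confidence in the correct label. The synonym table, the `toy_score` classifier, and all names are illustrative assumptions, not the paper's actual attack or models.

```python
# Toy greedy adversarial word-substitution attack (illustrative only).
# Assumptions: a black-box score function returning confidence in the
# correct label, and a small hand-written synonym table.

SYNONYMS = {
    "good": ["great", "fine"],
    "movie": ["film", "picture"],
}

def toy_score(words):
    """Hypothetical classifier: confidence that the text is 'positive',
    measured as the fraction of words in a tiny positive lexicon."""
    positive = {"good", "great"}
    return sum(w in positive for w in words) / max(len(words), 1)

def greedy_substitute(words, score, synonyms):
    """For each position, keep the synonym that lowers the score most."""
    words = list(words)
    for i, original in enumerate(words):
        best_word, best_score = original, score(words)
        for candidate in synonyms.get(original, []):
            words[i] = candidate
            if score(words) < best_score:
                best_word, best_score = candidate, score(words)
        words[i] = best_word  # commit the best substitution found
    return words

adversarial = greedy_substitute("a good movie".split(), toy_score, SYNONYMS)
print(adversarial)  # "good" is replaced because "fine" lowers the score
```

Real attacks in this setting differ mainly in how candidates are generated (e.g. embedding neighbors or a language model) and in the search strategy, but the greedy score-driven loop above captures the core idea.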

Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, Cho-Jui Hsieh • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text Classification | AGNews | Clean Accuracy | 94 | 118 |
| Text Classification | IMDB (test) | Clean Accuracy | 93.2 | 79 |
| Sentiment Analysis | SST-2 (test) | Clean Accuracy | 92.9 | 50 |
| Sentiment Analysis | IMDB (test) | Clean Accuracy (%) | 93.2 | 37 |
| Text Classification | IMDB | Clean Accuracy | 94.4 | 32 |
| Natural Language Inference | QNLI (test) | -- | -- | 27 |
| Text Classification | IMDB (test) | Clean Accuracy | 95.3 | 15 |
| Text Classification | AGNews (test) | Clean Accuracy | 95.4 | 15 |
| Text Classification | QNLI (test) | Clean Accuracy | 92.8 | 14 |
| Topic Classification | AG News (test) | Clean Accuracy | 94.9 | 8 |

Showing 10 of 11 rows.
