
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks

About

Backdoor attacks are an emergent training-time threat to deep neural networks (DNNs). They can manipulate the outputs of DNNs and are highly insidious. In natural language processing, several attack methods have been proposed that achieve very high attack success rates on multiple popular models. Nevertheless, there are few studies on defending against textual backdoor attacks. In this paper, we propose a simple and effective textual backdoor defense named ONION, which is based on outlier word detection and, to the best of our knowledge, is the first method that can handle all textual backdoor attack situations. Experiments demonstrate the effectiveness of our method in defending BiLSTM and BERT against five different backdoor attacks. All the code and data of this paper can be obtained at https://github.com/thunlp/ONION.
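The outlier-word-detection idea behind ONION can be sketched as follows: score each word by how much removing it lowers the sentence's perplexity, and drop words whose removal lowers perplexity by more than a threshold (such words are likely inserted trigger tokens). This is a minimal illustration, not the authors' implementation; in the paper a GPT-2 language model supplies the perplexity, while here `perplexity` is a pluggable callable, and the function name `onion_filter` and the threshold value are assumptions.

```python
def onion_filter(tokens, perplexity, threshold):
    """ONION-style outlier word detection (sketch).

    For each token i, compute the suspicion score
        f_i = PPL(sentence) - PPL(sentence without token i).
    A large f_i means deleting the token makes the sentence much more
    fluent, so the token is flagged as a likely backdoor trigger and
    removed; the remaining tokens are returned.
    """
    base = perplexity(tokens)
    kept = []
    for i, tok in enumerate(tokens):
        without = tokens[:i] + tokens[i + 1:]
        suspicion = base - perplexity(without)
        if suspicion <= threshold:  # removal barely helps -> keep the word
            kept.append(tok)
    return kept


if __name__ == "__main__":
    # Toy stand-in for a language model: the rare trigger token "cf"
    # (a trigger word used in some textual backdoor attacks) inflates
    # perplexity; a real defense would call GPT-2 here instead.
    toy_ppl = lambda toks: 1.0 + 10.0 * toks.count("cf")
    sentence = "I really loved this cf movie".split()
    print(onion_filter(sentence, toy_ppl, threshold=5.0))
```

In practice the threshold trades off attack mitigation against clean accuracy: too low, and benign rare words are also deleted; too high, and trigger words slip through.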

Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, Maosong Sun • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Sentiment Classification | SST2 (test) | Accuracy 89.18 | 214 |
| Sentiment Classification | IMDB (test) | -- | 144 |
| Text Classification | SST-2 | Accuracy 94.01 | 129 |
| Poisoned sample detection | TrojAI round 6 (test) | Precision 0.96 | 96 |
| Backdoor Defense | AGNews | Attack Success Rate 31.59 | 81 |
| Backdoor Defense | Average of four datasets (test) | Accuracy 87.42 | 70 |
| Bias Defense | Average of four datasets (test) | Accuracy 87.82 | 56 |
| Backdoor Defense | CR | Clean Accuracy (CA) 92.64 | 54 |
| Sentiment Analysis | CR | CA 90.96 | 54 |
| Sentiment Analysis | SST-2 (test) | Clean Accuracy 91.71 | 50 |

Showing 10 of 42 rows.
