ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
About
Backdoor attacks are an emergent training-time threat to deep neural networks (DNNs): they can manipulate the outputs of DNNs and are highly insidious. In the field of natural language processing, several attack methods have been proposed that achieve very high attack success rates against multiple popular models. Nevertheless, there are few studies on defending against textual backdoor attacks. In this paper, we propose a simple and effective textual backdoor defense named ONION, which is based on outlier word detection and, to the best of our knowledge, is the first method that can handle all textual backdoor attack settings. Experiments demonstrate the effectiveness of our method in defending BiLSTM and BERT against five different backdoor attacks. All the code and data of this paper can be obtained at https://github.com/thunlp/ONION.
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Sentiment Classification | SST2 (test) | Accuracy: 89.18 | 214 |
| Sentiment Classification | IMDB (test) | -- | 144 |
| Text Classification | SST-2 | Accuracy: 94.01 | 129 |
| Poisoned sample detection | TrojAI round 6 (test) | Precision: 0.96 | 96 |
| Backdoor Defense | AGNews | Attack Success Rate: 31.59 | 81 |
| Backdoor Defense | Average of four datasets (test) | Accuracy: 87.42 | 70 |
| Bias Defense | Average of four datasets (test) | Accuracy: 87.82 | 56 |
| Backdoor Defense | CR | Clean Accuracy (CA): 92.64 | 54 |
| Sentiment Analysis | CR | CA: 90.96 | 54 |
| Sentiment Analysis | SST-2 (test) | Clean Accuracy: 91.71 | 50 |