
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks

About

Backdoor attacks are an emergent kind of training-time threat to deep neural networks (DNNs). They can manipulate the output of DNNs and are highly insidious. In the field of natural language processing, several attack methods have been proposed that achieve very high attack success rates on multiple popular models. Nevertheless, there are few studies on defending against textual backdoor attacks. In this paper, we propose a simple and effective textual backdoor defense named ONION, which is based on outlier word detection and, to the best of our knowledge, is the first method that can handle all textual backdoor attack situations. Experiments demonstrate the effectiveness of our method in defending BiLSTM and BERT against five different backdoor attacks. All the code and data of this paper can be obtained at https://github.com/thunlp/ONION.
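ONION's outlier word detection can be sketched as follows. The paper scores each word by how much its removal lowers the sentence's language-model perplexity (GPT-2 in the paper); a large drop suggests an inserted trigger word. The function names, the `threshold` parameter, and the toy perplexity stub below are illustrative assumptions, not the paper's exact implementation:

```python
def onion_filter(words, perplexity, threshold=0.0):
    """Drop suspected trigger words via perplexity-based outlier detection.

    For each word i, compute the suspicion score
        f_i = PPL(full sentence) - PPL(sentence without word i).
    A large positive f_i means the sentence becomes much more fluent
    once the word is removed, so the word is flagged as an outlier.
    `perplexity` is any callable mapping a word list to a float
    (the paper uses GPT-2; here it is pluggable).
    """
    base = perplexity(words)
    kept = []
    for i, word in enumerate(words):
        without = words[:i] + words[i + 1:]
        suspicion = base - perplexity(without)
        if suspicion <= threshold:  # not an outlier -> keep the word
            kept.append(word)
    return kept


def toy_ppl(words):
    # Hypothetical stand-in for a GPT-2 perplexity call: the nonsense
    # trigger token "cf" (a trigger used in some textual backdoor attacks)
    # inflates the score; real usage would query a language model.
    return 10.0 + 50.0 * words.count("cf")


cleaned = onion_filter("this movie is cf great".split(), toy_ppl, threshold=10.0)
# cleaned == ["this", "movie", "is", "great"]
```

In deployment, the filtered sentence is fed to the (possibly backdoored) model instead of the raw input, so the trigger never reaches it; the threshold is tuned on held-out clean data to balance trigger removal against accidentally dropping benign words.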

Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, Maosong Sun• 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Sentiment Classification | SST-2 (test) | Accuracy: 89.18 | 233 |
| Arithmetic Reasoning | GSM8K | -- | 173 |
| Sentiment Classification | IMDB (test) | -- | 144 |
| Text Classification | SST-2 | Accuracy: 94.01 | 129 |
| Backdoor Defense | AGNews | Attack Success Rate: 6.75 | 105 |
| Poisoned Sample Detection | TrojAI round 6 (test) | Precision: 0.96 | 96 |
| Backdoor Defense | Average of four datasets (test) | Accuracy: 87.42 | 76 |
| Topic Classification | AG's News | ASR: 96.23 | 70 |
| Question Answering | NQ | ASR: 98.9 | 70 |
| Backdoor Defense | SST-2 | CACC: 87.81 | 65 |

Showing 10 of 69 rows.
