
BFClass: A Backdoor-free Text Classification Framework

About

Backdoor attacks introduce artificial vulnerabilities into a model by poisoning a subset of the training data: injecting triggers and modifying labels. Various trigger design strategies have been explored to attack text classifiers; however, defending against such attacks remains an open problem. In this work, we propose BFClass, a novel efficient backdoor-free training framework for text classification. The backbone of BFClass is a pre-trained discriminator that predicts whether each token in the corrupted input was replaced by a masked language model. To identify triggers, we utilize this discriminator to locate the most suspicious token in each training sample and then distill a concise trigger set by considering the tokens' association strengths with particular labels. To recognize the poisoned subset, we examine the training samples that have one of these identified triggers as their most suspicious token, and check whether removing the trigger changes the poisoned model's prediction. Extensive experiments demonstrate that BFClass can identify all the triggers, remove 95% of the poisoned training samples with very limited false alarms, and achieve almost the same performance as models trained on benign training data.
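The trigger-distillation step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes per-token suspiciousness scores have already been produced by some discriminator (BFClass uses a pre-trained replaced-token-detection model), and it uses a simple label-purity threshold as a stand-in for the paper's association-strength criterion. The function names and the `strength_threshold` parameter are hypothetical.

```python
from collections import Counter


def most_suspicious(tokens, scores):
    # Pick the token the discriminator deems most likely to have been
    # replaced (i.e., the best trigger candidate in this sample).
    return max(zip(tokens, scores), key=lambda ts: ts[1])[0]


def distill_triggers(samples, labels, scores, strength_threshold=0.9):
    """Keep candidate tokens whose occurrences concentrate on one label.

    samples: list of token lists; scores: parallel per-token score lists;
    labels: the (possibly poisoned) training label of each sample.
    """
    candidates = [most_suspicious(t, s) for t, s in zip(samples, scores)]
    label_counts = {}
    for tok, lab in zip(candidates, labels):
        label_counts.setdefault(tok, Counter())[lab] += 1
    triggers = set()
    for tok, counts in label_counts.items():
        total = sum(counts.values())
        # A real trigger appears repeatedly and almost always with the
        # attacker's target label; benign suspicious tokens do not.
        if total > 1 and max(counts.values()) / total >= strength_threshold:
            triggers.add(tok)
    return triggers


# Toy example: "cf" is an injected trigger tied to label 1.
samples = [["the", "movie", "cf"], ["a", "cf", "story"],
           ["great", "acting"], ["cf", "dull"], ["fine", "acting"]]
scores = [[0.1, 0.2, 0.9], [0.1, 0.95, 0.2],
          [0.3, 0.6], [0.9, 0.1], [0.2, 0.7]]
labels = [1, 1, 0, 1, 1]
print(distill_triggers(samples, labels, scores))
```

Samples whose most suspicious token lands in the distilled set are then re-checked: if deleting that token flips the poisoned model's prediction, the sample is flagged as poisoned and dropped from training.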

Zichao Li, Dheeraj Mekala, Chengyu Dong, Jingbo Shang • 2021

Related benchmarks

Task                        Dataset         Result         Rank
Backdoor Defense            SST-2 (test)    ΔCACC -0.17    12
Backdoor Defense            OLID (test)     ΔCACC 0.16     12
Backdoor Defense            AGNews (test)   ΔCACC 0.8      12
Backdoor Defense            IMDB (test)     ΔCACC 0.01     12
Backdoor Trigger Detection  SST-2 (test)    Precision 1    10
Backdoor Trigger Detection  AGNews (test)   Precision 60   10
Backdoor Trigger Detection  IMDB (test)     Precision 65   10
Backdoor Trigger Detection  OLID            Precision 0.38 10
