
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information

About

Backdoor attacks insert maliciously crafted data into a training set so that, at inference time, the model misclassifies inputs patched with a backdoor trigger as an attacker-specified target label. For backdoor attacks to bypass human inspection, it is essential that the injected data appear to be correctly labeled; attacks with this property are often referred to as "clean-label attacks." Existing clean-label backdoor attacks require knowledge of the entire training set to be effective. Obtaining such knowledge is difficult or impossible because training data are often gathered from multiple sources (e.g., face images from different users). It remains an open question whether backdoor attacks still pose a real threat under this constraint. This paper provides an affirmative answer by designing an algorithm that mounts clean-label backdoor attacks based only on knowledge of representative examples from the target class. With poisoning equal to or less than 0.5% of the target-class data and 0.05% of the full training set, we can train a model to classify test examples from arbitrary classes into the target class whenever the examples are patched with the backdoor trigger. Our attack works well across datasets and models, even when the trigger is presented in the physical world. Exploring the space of defenses, we find that, surprisingly, our attack can evade the latest state-of-the-art defenses in their vanilla form, or, with a simple twist, can be adapted to downstream defenses. We study the cause of this intriguing effectiveness and find that, because the trigger synthesized by our attack contains features as persistent as the original semantic features of the target class, any attempt to remove the trigger inevitably hurts model accuracy first.
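The mechanics of a clean-label attack described above can be illustrated with a minimal NumPy sketch. This is not the paper's Narcissus algorithm (which synthesizes an optimized trigger from target-class surrogate data); it only shows the clean-label constraint in action: a trigger is blended into a small fraction of target-class images while their labels are left untouched, and the same trigger is patched onto arbitrary inputs at test time. The function names, the additive-blend trigger, and the `amp` strength parameter are all illustrative assumptions.

```python
import numpy as np

def poison_clean_label(target_images, trigger, poison_frac=0.005, amp=0.1, seed=0):
    """Clean-label poisoning sketch (illustrative, not the paper's method).

    Additively blends `trigger` into a small fraction of target-class images.
    Labels are never modified, so the poisoned samples still look correctly
    labeled to a human inspector -- the "clean-label" property.
    """
    rng = np.random.default_rng(seed)
    n = len(target_images)
    k = max(1, int(poison_frac * n))  # e.g. <= 0.5% of the target class
    idx = rng.choice(n, size=k, replace=False)
    poisoned = target_images.copy()
    poisoned[idx] = np.clip(poisoned[idx] + amp * trigger, 0.0, 1.0)
    return poisoned, idx

def apply_trigger(x, trigger, amp=0.1):
    """At inference time, patch the same trigger onto an arbitrary input."""
    return np.clip(x + amp * trigger, 0.0, 1.0)
```

A model trained on the poisoned set associates the trigger pattern with the target class; at test time, `apply_trigger` on an input from any class pushes the prediction toward that class. In the paper's setting the trigger is synthesized so that its features are as persistent as the target class's own semantic features, which is what makes it hard to remove.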

Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, Ruoxi Jia • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Backdoor Attack | CIFAR-10-S | Clean Attack Drop (CAD) | -0.9 | 13 |
| Backdoor Attack | Pets | CAD | -0.47 | 13 |
| Backdoor Attack | CelebA-S | CAD | -0.03 | 13 |
| Backdoor Attack | ImageNet 10-S | CAD | -0.2 | 13 |
| Backdoor Attack | Cars | CAD | 0.11 | 13 |
| Backdoor Attack | Caltech-101 | CAD | -47 | 13 |
| Image Generation | CIFAR-10 (synthetic) | FID | 9.01 | 12 |
| Backdoor Image Classification | CIFAR-10 (test) | BA (DCB) | 94.2 | 12 |
| Backdoor Attack | Backdoor Attack Evaluation Summary | Poison Rate | 0.05 | 10 |
