
Backdoor Attack in the Physical World

About

A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs), such that the predictions of the infected model are maliciously changed when the hidden backdoor is activated by an attacker-defined trigger. Currently, most existing backdoor attacks adopt the static-trigger setting, i.e., triggers across the training and testing images have the same appearance and are located in the same area. In this paper, we revisit this attack paradigm by analyzing trigger characteristics. We demonstrate that this paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training. As such, these attacks are far less effective in the physical world, where the location and appearance of the trigger in the digitized image may differ from those of the one used for training. Moreover, we also discuss how to alleviate this vulnerability. We hope that this work will inspire further exploration of backdoor properties, helping the design of more advanced backdoor attack and defense methods.
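To make the static-trigger setting concrete, the sketch below stamps a fixed trigger patch onto a toy image at a fixed location (training-time poisoning), then stamps it at a random offset to mimic the physical-world setting where the digitized trigger shifts. This is an illustrative toy, not the paper's implementation; the image, trigger, and `stamp_trigger` helper are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stamp_trigger(image, trigger, top, left):
    """Paste a trigger patch onto a copy of the image at (top, left)."""
    poisoned = image.copy()
    h, w = trigger.shape[:2]
    poisoned[top:top + h, left:left + w] = trigger
    return poisoned

# Toy 32x32 grayscale image and a 4x4 white-square trigger (illustrative values).
image = np.zeros((32, 32), dtype=np.float32)
trigger = np.ones((4, 4), dtype=np.float32)

# Static-trigger setting: the trigger location is fixed (here, the bottom-right
# corner) across all training and testing images.
train_poisoned = stamp_trigger(image, trigger, top=28, left=28)

# Physical-world setting: the trigger in the digitized test image may appear at
# a different location; a random offset models this train/test inconsistency,
# which the paper shows sharply degrades the attack's success rate.
dt, dl = rng.integers(0, 28, size=2)
test_poisoned = stamp_trigger(image, trigger, top=int(dt), left=int(dl))
```

A model backdoored with the fixed-corner trigger would be queried with `test_poisoned`, whose trigger position generally no longer matches training.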

Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, Shu-Tao Xia • 2021

Related benchmarks

Task                 | Dataset                                   | Result                      | Rank
Backdoor Defense     | CIFAR-10                                  | Attack Success Rate: 49.5   | 78
Backdoor Defense     | GTSRB                                     | PA: 0.051                   | 21
Image Classification | GTSRB 32x32, 43 classes (test)            | Accuracy (CA): 97.41        | 17
Image Classification | CIFAR-10 32x32 (test)                     | CA: 82.84                   | 17
Image Classification | Imagenette 256x256, 10 classes (test)     | Classification Accuracy: 90.21 | 17
Backdoor Attack      | Backdoor Attack Evaluation Summary        | Poison Rate: 0.5            | 10
Backdoor Defense     | Fashion MNIST                             | Clean Accuracy: 17.5        | 8
