
Black-box Backdoor Defense via Zero-shot Image Purification

About

Backdoor attacks inject poisoned samples into the training data, resulting in the misclassification of the poisoned input during a model's deployment. Defending against such attacks is challenging, especially for real-world black-box models where only query access is permitted. In this paper, we propose a novel defense framework against backdoor attacks through Zero-shot Image Purification (ZIP). Our framework can be applied to poisoned models without requiring internal information about the model or any prior knowledge of the clean/poisoned samples. Our defense framework involves two steps. First, we apply a linear transformation (e.g., blurring) on the poisoned image to destroy the backdoor pattern. Then, we use a pre-trained diffusion model to recover the missing semantic information removed by the transformation. In particular, we design a new reverse process by using the transformed image to guide the generation of high-fidelity purified images, which works in zero-shot settings. We evaluate our ZIP framework on multiple datasets with different types of attacks. Experimental results demonstrate the superiority of our ZIP framework compared to state-of-the-art backdoor defense baselines. We believe that our results will provide valuable insights for future defense methods for black-box models. Our code is available at https://github.com/sycny/ZIP.
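The two-step defense described above can be sketched in code. This is a minimal illustration, not the authors' implementation: `toy_denoiser` is a hypothetical stand-in for one reverse step of a pre-trained diffusion model, and the guidance rule is a simplified nudge toward the transformed image rather than the paper's exact guided reverse process.

```python
import numpy as np

def blur(x, k=3):
    """Step 1: a linear transformation (mean blur) that destroys
    localized backdoor trigger patterns in the input image."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def purify(x_poisoned, denoise_step, T=20, guidance=0.5, seed=0):
    """Step 2 (sketch): run a reverse process from noise, using the
    blurred image as a guidance signal so semantic content is restored
    while the destroyed trigger pattern is not reintroduced.
    `denoise_step(x, t)` is a hypothetical denoiser standing in for a
    pre-trained diffusion model."""
    rng = np.random.default_rng(seed)
    y = blur(x_poisoned)                      # guidance signal
    x = rng.standard_normal(x_poisoned.shape) # start from pure noise
    for t in range(T, 0, -1):
        x = denoise_step(x, t)                # one reverse-diffusion step
        x = x + guidance * (y - blur(x))      # pull toward the blurred image
    return x

# Toy usage: a trivial shrinkage "denoiser" on an 8x8 image.
toy_denoiser = lambda x, t: 0.9 * x
poisoned = np.ones((8, 8))
purified = purify(poisoned, toy_denoiser)
```

Because the defense needs only the input image and a generic pre-trained diffusion model, it fits the black-box setting: no gradients, weights, or training data from the attacked classifier are required.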

Yucheng Shi, Mengnan Du, Xuansheng Wu, Zihan Guan, Jin Sun, Ninghao Liu• 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Backdoor Defense | CIFAR10 (test) | ASR | 40.6 | 322
Backdoor Defense | CIFAR-10 | Attack Success Rate | 100 | 78
Visual Question Answering | OKVQA | ASR | 99.22 | 42
Visual Question Answering | VQA v2 | ASR | 95.31 | 42
Backdoor Defense | GTSRB 1% poison rate (test) | Clean Accuracy | 96.2 | 27
Backdoor Defense | GTSRB | PA | 0.239 | 21
Image Classification | Imagenette 256 x 256 10 classes (test) | Classification Accuracy | 87.26 | 17
Image Classification | CIFAR-10 32 x 32 (test) | CA | 80.1 | 17
Image Classification | GTSRB 32 x 32 43 classes (test) | Accuracy (CA) | 96.18 | 17
Robotic Manipulation | Lifting Cube 30 random repositioning trials (test) | CP | 0.7667 | 16

(Showing 10 of 26 rows.)
