
IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency

About

Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can maliciously trigger model misclassifications by implanting a hidden backdoor during model training. This paper proposes a simple yet effective input-level backdoor detection method (dubbed IBD-PSC) that acts as a "firewall" to filter out malicious test images. Our method is motivated by an intriguing phenomenon, i.e., parameter-oriented scaling consistency (PSC), where the prediction confidences of poisoned samples are significantly more consistent than those of benign ones when amplifying model parameters. In particular, we provide a theoretical analysis to safeguard the foundations of the PSC phenomenon. We also design an adaptive method to select which BN layers to scale up for effective detection. Extensive experiments on benchmark datasets verify the effectiveness and efficiency of our IBD-PSC method and its resistance to adaptive attacks. Code is available at https://github.com/THUYimingLi/BackdoorBox.
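The detection idea described above can be sketched in a few lines: score an input by how consistently the parameter-scaled model copies keep its original prediction, and flag high-consistency inputs as poisoned. This is a minimal, hedged illustration of the PSC intuition, not the authors' implementation; the scoring function, the toy confidence vectors, and the threshold value are all illustrative assumptions.

```python
def psc_score(confidences_per_scale, original_label):
    """Average confidence assigned to the original predicted label,
    taken across several copies of the model whose (e.g., BN) parameters
    have been amplified by different scaling factors."""
    return sum(c[original_label] for c in confidences_per_scale) / len(confidences_per_scale)


def looks_poisoned(confidences_per_scale, original_label, threshold=0.9):
    """Poisoned inputs tend to keep high confidence under parameter scaling
    (high PSC score); benign predictions degrade, giving a lower score.
    The 0.9 threshold is an arbitrary illustrative choice."""
    return psc_score(confidences_per_scale, original_label) >= threshold


# Toy example: class-confidence vectors from three scaled model copies.
poisoned_like = [[0.98, 0.01, 0.01], [0.97, 0.02, 0.01], [0.99, 0.005, 0.005]]
benign_like = [[0.60, 0.30, 0.10], [0.35, 0.40, 0.25], [0.20, 0.50, 0.30]]
```

With both inputs originally predicted as class 0, `looks_poisoned(poisoned_like, 0)` is `True` while `looks_poisoned(benign_like, 0)` is `False`: the benign input's confidence in its original label collapses as the parameters are amplified.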

Linshan Hou, Ruili Feng, Zhongyun Hua, Wei Luo, Leo Yu Zhang, Yiming Li • 2024

Related benchmarks

Task | Dataset | Result | Rank
Backdoor Detection | CIFAR-10 imbalanced µ=0.9, ρ=100 (test) | BadNets TPR 85.4 | 13
Backdoor Sample Detection | CIFAR-10 balanced ρ=1 (train test) | BadNets TPR 99.4 | 13
Backdoor Sample Detection | CIFAR-10 imbalanced µ=0.9, ρ=10 (train test) | BadNets TPR 91.2 | 13
Backdoor Sample Detection | CIFAR-10 imbalanced µ=0.9, ρ=200 (train test) | BadNets TPR 51.7 | 13
Backdoor Detection | CIFAR-10 imbalanced µ=0.9, ρ=2 (test) | BadNets TPR 95.3 | 13
