STRIP: A Defence Against Trojan Attacks on Deep Neural Networks

About

A recent trojan attack on deep neural network (DNN) models is an insidious variant of data poisoning attacks. Trojan attacks exploit an effective backdoor created in a DNN model by leveraging the difficulty of interpreting the learned model, causing it to misclassify any input signed with the attacker's chosen trojan trigger. Since the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojan inputs is a challenge, especially at run-time when models are in active operation. This work builds a STRong Intentional Perturbation (STRIP) based run-time trojan attack detection system and focuses on vision systems. We intentionally perturb the incoming input, for instance by superimposing various image patterns, and observe the randomness of predicted classes for the perturbed inputs from a given deployed model, whether malicious or benign. Low entropy in the predicted classes violates the input-dependence property of a benign model and implies the presence of a malicious input, a characteristic of a trojaned input. The high efficacy of our method is validated through case studies on three popular and contrasting datasets: MNIST, CIFAR10 and GTSRB. We achieve an overall false acceptance rate (FAR) of less than 1%, given a preset false rejection rate (FRR) of 1%, for different types of triggers. On CIFAR10 and GTSRB, we empirically achieve 0% for both FRR and FAR. We also evaluate STRIP's robustness against a number of trojan attack variants and adaptive attacks.
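To make the detection procedure concrete, here is a minimal sketch in Python of the entropy test described above. It assumes a Keras-style classifier exposing predict(), images as NumPy arrays scaled to [0, 1], and a held-out set of clean images to superimpose; the function names, the 50/50 blending weight, and the percentile-based threshold calibration are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def strip_entropies(model, x, held_out_clean, n=100, alpha=0.5):
    """Superimpose n random clean images onto input x (linear blend) and
    return the Shannon entropy of the model's prediction for each copy."""
    idx = np.random.choice(len(held_out_clean), size=n, replace=False)
    blended = np.clip(alpha * x[None, ...] + (1.0 - alpha) * held_out_clean[idx],
                      0.0, 1.0)
    probs = model.predict(blended)          # shape: (n, num_classes)
    probs = np.clip(probs, 1e-12, 1.0)      # guard against log(0)
    return -np.sum(probs * np.log2(probs), axis=1)

def calibrate_threshold(model, benign_inputs, held_out_clean, frr=0.01):
    """Set the detection boundary at the frr-quantile of mean entropies over
    known-benign inputs, so roughly an frr fraction of them is rejected."""
    means = [strip_entropies(model, x, held_out_clean).mean()
             for x in benign_inputs]
    return np.percentile(means, 100 * frr)

def is_trojaned(model, x, held_out_clean, threshold):
    """Flag x as trojaned when perturbed predictions stay abnormally
    consistent (low entropy): the trigger dominates the blended input."""
    return strip_entropies(model, x, held_out_clean).mean() < threshold
```

Averaging over a batch of perturbed copies stabilizes the entropy estimate: because the trigger's effect is input-agnostic, a trojaned input keeps predicting the attacker's target class under superimposition and shows markedly lower entropy than a benign input.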

Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal • 2019

Related benchmarks

Task                      | Dataset                                       | Result                 | Rank
--------------------------|-----------------------------------------------|------------------------|-----
Backdoor Detection        | CIFAR-10                                      | -                      | 120
Sentiment Classification  | SST-2 64 instances (test)                     | Accuracy: 92.09        | 80
Backdoor Detection        | GTSRB                                         | TPR: 99.9              | 39
Backdoor Sample Detection | CIFAR-10 balanced ρ=1 (train test)            | Badnets TPR: 98.5      | 13
Poisoning Defense         | 24 datasets averaged                          | Poison Accuracy: 62.68 | 13
Backdoor Detection        | CIFAR-10 imbalanced µ=0.9, ρ=2 (test)         | Badnets TPR: 90.2      | 13
Backdoor Detection        | CIFAR-10 imbalanced µ=0.9, ρ=100 (test)       | Badnets TPR: 49.5      | 13
Backdoor Sample Detection | CIFAR-10 imbalanced µ=0.9, ρ=200 (train test) | Badnets TPR: 18.6      | 13
Backdoor Sample Detection | CIFAR-10 imbalanced µ=0.9, ρ=10 (train test)  | Badnets TPR: 63.4      | 13
Backdoor Detection        | Tiny-ImageNet                                 | TPR: 84.1              | 12
