
Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks

About

Deep neural networks (DNNs) provide excellent performance across a wide range of classification tasks, but their training requires high computational resources and is often outsourced to third parties. Recent work has shown that outsourced training introduces the risk that a malicious trainer will return a backdoored DNN that behaves normally on most inputs but causes targeted misclassifications or degrades the accuracy of the network when a trigger known only to the attacker is present. In this paper, we provide the first effective defenses against backdoor attacks on DNNs. We implement three backdoor attacks from prior work and use them to investigate two promising defenses, pruning and fine-tuning. We show that neither, by itself, is sufficient to defend against sophisticated attackers. We then evaluate fine-pruning, a combination of pruning and fine-tuning, and show that it successfully weakens or even eliminates the backdoors, i.e., in some cases reducing the attack success rate to 0% with only a 0.4% drop in accuracy for clean (non-triggering) inputs. Our work provides the first step toward defenses against backdoor attacks in deep neural networks.
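The pruning half of the defense described above can be sketched in a few lines. The idea is that backdoor behavior tends to live in neurons that stay dormant on clean inputs, so removing the least-active neurons (measured on clean data) weakens the backdoor; fine-tuning on clean data then recovers accuracy. This is a minimal NumPy sketch for a single fully-connected layer; the `prune_dormant_neurons` helper and its signature are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_dormant_neurons(weights, activations, frac):
    """Zero out the output neurons whose mean activation on clean
    inputs is lowest (the 'dormant' candidates for backdoor behavior).

    weights:      (n_neurons, n_inputs) weight matrix of one layer
    activations:  (n_samples, n_neurons) activations on clean data
    frac:         fraction of neurons to prune

    Returns the pruned weight matrix and the pruned neuron indices.
    """
    mean_act = activations.mean(axis=0)
    n_prune = int(frac * weights.shape[0])
    idx = np.argsort(mean_act)[:n_prune]  # least-active neurons first
    pruned = weights.copy()
    pruned[idx, :] = 0.0  # removing a neuron = zeroing its weights
    return pruned, idx

# Toy demonstration: neuron 2 never fires on clean data, so it is
# the first candidate for pruning.
w = rng.normal(size=(8, 3))
acts = np.abs(rng.normal(size=(100, 8)))
acts[:, 2] = 0.0  # dormant neuron
pruned_w, pruned_idx = prune_dormant_neurons(w, acts, 0.25)
```

In the full fine-pruning defense, this pruning step would be followed by ordinary fine-tuning (a few epochs of gradient descent on clean, correctly labeled data), which is what lets the defense recover the small accuracy drop that pruning alone causes.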

Kang Liu, Brendan Dolan-Gavitt, Siddharth Garg · 2018

Related benchmarks

Task                     | Dataset                         | Metric              | Result | Rank
Backdoor Defense         | CIFAR10 (test)                  | ASR                 | 0.00   | 322
Text Classification      | SST-2                           | Accuracy            | 96.32  | 129
Backdoor Defense         | GTSRB (test)                    | ASR                 | 2.23   | 127
Backdoor Defense         | Tiny-ImageNet                   | Accuracy            | 52.38  | 102
Backdoor Defense         | AGNews                          | Attack Success Rate | 7.07   | 81
Sentiment Classification | SST-2 64 instances (test)       | Accuracy            | 92.2   | 80
Image Classification     | MNIST                           | Clean Accuracy      | 97     | 71
Backdoor Defense         | Average of four datasets (test) | Accuracy            | 87.5   | 70
Backdoor Defense         | CIFAR10 (train)                 | ASR                 | 2.45   | 63
Image Classification     | CINIC-10                        | Accuracy            | 68     | 59

Showing 10 of 55 rows
