DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection

About

Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model that allows adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also removes benign models of clients with deviating data distributions, causing the aggregated model to perform poorly for such clients. To address this problem, we propose DeepSight, a novel model filtering approach for mitigating backdoor attacks. It is based on three novel techniques that make it possible to characterize the distribution of the data used to train model updates and to measure fine-grained differences in the internal structure and outputs of NNs. Using these techniques, DeepSight can identify suspicious model updates. We also develop a scheme that can accurately cluster model updates. Combining the results of both components, DeepSight is able to identify and eliminate model clusters containing poisoned models with high attack impact. We also show that the backdoor contributions of possibly undetected poisoned models can be effectively mitigated with existing weight clipping-based defenses. We evaluate the performance and effectiveness of DeepSight and show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
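The abstract mentions that residual backdoor contributions from undetected poisoned updates can be mitigated with existing weight clipping-based defenses. The sketch below illustrates the general idea of such a defense, not DeepSight's specific pipeline: each client update is rescaled so its L2 norm does not exceed a bound before averaging, limiting the influence any single (possibly poisoned) update can have. The function names and the bound `s` are illustrative assumptions.

```python
import numpy as np

def clip_update(update, s):
    """Rescale a flattened model update so its L2 norm is at most s."""
    norm = np.linalg.norm(update)
    if norm > s:
        update = update * (s / norm)
    return update

def aggregate(updates, s):
    """Clip each update to norm bound s, then average (FedAvg-style)."""
    clipped = [clip_update(u, s) for u in updates]
    return np.mean(clipped, axis=0)

# A large-norm (high attack impact) update is scaled down to the bound,
# so it can no longer dominate the aggregate.
benign = np.array([0.1, -0.2, 0.05])
poisoned = np.array([5.0, -4.0, 3.0])
aggregated = aggregate([benign, poisoned], s=1.0)
print(np.linalg.norm(clip_update(poisoned, 1.0)))  # clipped to ~1.0
```

In a real deployment the clipping bound is typically chosen from the norms of benign updates; the combination of filtering out high-impact poisoned clusters first and clipping what remains is what the paper argues makes the residual backdoor contribution negligible.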

Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, Ahmad-Reza Sadeghi • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | Accuracy: 57.94 | 3518 |
| Image Classification | Tiny ImageNet (test) | -- | 265 |
| Backdoor Defense | CIFAR-10 (test) | Clean Accuracy: 80.84 | 40 |
| Image Classification | CIFAR-10 IID | Average BA: 0.4864 | 37 |
| Classification under DBA attack | MNIST | Robustness BA: 28.63 | 7 |
| Classification under DBA attack | F-MNIST | Balance Accuracy: 24.36 | 7 |
| Classification under DBA attack | EMNIST | Robustness Accuracy (BA): 44.16 | 7 |
| Backdoor Defense | CIFAR-10, alpha=0.2, Neurotoxin attack, Round 2001, 100 rounds (test) | Mean Accuracy: 80.35 | 7 |
| Backdoor Defense | CIFAR-10, alpha=0.5, Neurotoxin attack, Round 2001, 100 rounds (test) | Mean Accuracy: 81.63 | 7 |
| Backdoor Defense | CIFAR-10, alpha=0.7, Neurotoxin attack, Round 2001, 100 rounds (test) | Mean Accuracy: 82.74 | 7 |

(10 of 11 rows shown)
