FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning

About

Federated learning (FL) is a collaborative learning paradigm allowing multiple clients to jointly train a model without sharing their training data. However, FL is susceptible to poisoning attacks, in which the adversary injects manipulated model updates into the federated model aggregation process to corrupt or destroy predictions (untargeted poisoning) or implant hidden functionalities (targeted poisoning or backdoors). Existing defenses against poisoning attacks in FL have several limitations, such as relying on specific assumptions about attack types, strategies, or data distributions, or not being sufficiently robust against advanced injection techniques and strategies while simultaneously maintaining the utility of the aggregated model. To address the deficiencies of existing defenses, we take a generic and completely different approach to detecting poisoning (targeted and untargeted) attacks. We present FreqFed, a novel aggregation mechanism that transforms the model updates (i.e., weights) into the frequency domain, where we can identify the core frequency components that carry sufficient information about the weights. This allows us to effectively filter out malicious updates resulting from local training on the clients, regardless of attack types, strategies, and clients' data distributions. We extensively evaluate the efficiency and effectiveness of FreqFed in different application domains, including image classification, word prediction, IoT intrusion detection, and speech recognition. We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
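The core idea can be sketched in a few lines: transform each flattened client update into the frequency domain, keep only the low-frequency components, and filter out updates whose spectra are outliers before averaging. This is a minimal illustrative sketch, not the paper's implementation: the `keep_frac` and `threshold` parameters are invented for illustration, and the median-based cosine-distance filter stands in for the clustering the paper uses on the frequency components.

```python
import numpy as np
from scipy.fft import dct


def freqfed_aggregate(updates, keep_frac=0.1, threshold=0.5):
    """Frequency-domain filtering of client updates, then plain averaging.

    Simplified sketch of the FreqFed idea; keep_frac, threshold, and the
    median-based outlier filter are illustrative assumptions.
    """
    updates = [np.asarray(u, dtype=float).ravel() for u in updates]
    # Keep only the leading low-frequency DCT-II coefficients, which are
    # argued to carry the essential information about the weights.
    k = max(1, int(len(updates[0]) * keep_frac))
    spectra = np.stack([dct(u, norm="ortho")[:k] for u in updates])

    # Stand-in for clustering: accept updates whose low-frequency spectrum
    # lies close (in cosine distance) to the coordinate-wise median spectrum.
    median = np.median(spectra, axis=0)

    def cos_dist(a, b):
        return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    dists = np.array([cos_dist(s, median) for s in spectra])
    accepted = [i for i, d in enumerate(dists) if d < threshold]

    # Aggregate only the accepted updates (federated averaging).
    aggregated = np.mean([updates[i] for i in accepted], axis=0)
    return aggregated, accepted
```

For example, with eight benign updates clustered around a common direction and two sign-flipped (poisoned) updates, the poisoned spectra sit at cosine distance near 2 from the median spectrum and are rejected, while the benign ones pass.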

Hossein Fereidooni, Alessandro Pegoraro, Phillip Rieger, Alexandra Dmitrienko, Ahmad-Reza Sadeghi • 2023

Related benchmarks

Task                           Dataset         Metric                         Result   Rank
Image Classification           CIFAR-10        Accuracy                       71       508
Image Classification           F-MNIST         Accuracy                       89.9     109
Sentiment Analysis             Sent140         Accuracy                       78       79
Backdoor Attack                FMNIST          ASR                            47       75
Question Answering             NQ              ASR                            99.45    70
Backdoor Attack Success Rate   MNIST           Backdoor Attack Success Rate   31.7     60
Backdoor Attack Success Rate   Sentiment-140   Backdoor Attack Success Rate   15.3     60
Backdoor Attack Success Rate   CIFAR-10        Backdoor Attack Success Rate   13       60
Question Answering             CoQA            CACC                           75.3     40
Question Answering             WebQA           CACC                           45.77    40

(Showing 10 of 18 rows)
