
Mitigating Sybils in Federated Learning Poisoning

About

Machine learning (ML) over distributed multi-party data is required for a variety of domains. Existing approaches, such as federated learning, collect the outputs computed by a group of devices at a central aggregator and run iterative algorithms to train a globally shared model. Unfortunately, such approaches are susceptible to a variety of attacks, including model poisoning, which is made substantially worse in the presence of sybils. In this paper we first evaluate the vulnerability of federated learning to sybil-based poisoning attacks. We then describe FoolsGold, a novel defense to this problem that identifies poisoning sybils based on the diversity of client updates in the distributed learning process. Unlike prior work, our system does not bound the expected number of attackers, requires no auxiliary information outside of the learning process, and makes fewer assumptions about clients and their data. In our evaluation we show that FoolsGold exceeds the capabilities of existing state-of-the-art approaches to countering sybil-based label-flipping and backdoor poisoning attacks. Our results hold for different distributions of client data, varying poisoning targets, and various sybil strategies. Code can be found at: https://github.com/DistributedML/FoolsGold
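The core idea described above — down-weighting clients whose update histories are suspiciously similar — can be sketched in a few lines of NumPy. This is a simplified illustration of the cosine-similarity re-weighting scheme, not the authors' implementation (see the linked repository for that); the function name and the specific clipping constants are illustrative.

```python
import numpy as np

def foolsgold_weights(historical_grads):
    """Re-weight client contributions by update diversity.

    historical_grads: (n_clients, n_params) array holding each client's
    accumulated gradient history. Sybils pushing a shared poisoning
    objective produce unusually similar histories, so their weights are
    driven toward zero, while diverse honest clients keep weight ~1.
    """
    n = historical_grads.shape[0]

    # Pairwise cosine similarity between client gradient histories.
    norms = np.linalg.norm(historical_grads, axis=1, keepdims=True)
    unit = historical_grads / np.maximum(norms, 1e-12)
    cs = unit @ unit.T
    np.fill_diagonal(cs, 0.0)

    # "Pardoning": if client j looks more sybil-like than client i,
    # rescale cs[i, j] so an honest client that happens to resemble a
    # sybil is not penalized for it.
    v = cs.max(axis=1)
    for i in range(n):
        for j in range(n):
            if v[j] > v[i]:
                cs[i, j] *= v[i] / v[j]

    # Map max similarity to a weight in [0, 1], then sharpen with a
    # logit so near-duplicate updates are aggressively suppressed.
    w = 1.0 - cs.max(axis=1)
    w = np.clip(w / w.max(), 1e-5, 1 - 1e-5)
    w = np.log(w / (1.0 - w)) + 0.5
    return np.clip(w, 0.0, 1.0)
```

An aggregator would multiply each client's submitted update by its weight before averaging, so two sybils replaying near-identical poisoned gradients contribute almost nothing to the global model.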

Clement Fung, Chris J.M. Yoon, Ivan Beschastnikh • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Federated Learning Model Poisoning Robustness | Purchase Cross-silo (100 FL clients, 500 global iterations, 3-layer DNN model) | -- | -- | 26 |
| Safety and Utility Evaluation | BeaverTails & WildChat | Rule Adherence | 51.54 | 11 |
| Human Activity Recognition | HAR Cross-silo (30 FL clients, 1000 global iterations, logistic regression model) | A Theta Performance | 97.21 | 10 |
| Federated Learning Robustness | Fashion MNIST | A_theta Robustness Score | 84.45 | 10 |
| Image Classification | CIFAR-10 Cross-silo (test) | -- | 85.21 | 10 |
| Federated Learning | EMNIST Cross-device | Aθ Score | 74.51 | 10 |
| Image Classification | MNIST Cross-silo | Classification Accuracy | 94.8 | 10 |
| Model Poisoning Attack Impact | EMNIST Cross-silo | -- | 74.39 | 10 |
| Safety Evaluation | BeaverTails & LMSYS-Chat (test) | Rule Score | 72.12 | 8 |
| Robust Safety and Utility Evaluation in Federated Learning | BeaverTails & LMSYS-Chat | Rule Score | 53.27 | 8 |

(Showing 10 of 16 rows.)
