
A Little Is Enough: Circumventing Defenses For Distributed Learning

About

Distributed learning is central to large-scale training of deep-learning models, but it is exposed to a security threat in which Byzantine participants can interrupt or control the learning process. Previous attack models and their corresponding defenses assume that the rogue participants are (a) omniscient (know the data of all other participants), and (b) introduce large changes to the parameters. We show that small but well-crafted changes are sufficient, leading to a novel non-omniscient attack on distributed learning that goes undetected by all existing defenses. We demonstrate that our attack method works not only for preventing convergence but also for repurposing the model's behavior (backdooring). We show that 20% corrupt workers are sufficient to degrade a CIFAR-10 model's accuracy by 50%, as well as to introduce backdoors into MNIST and CIFAR-10 models without hurting their accuracy.
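The core idea of the attack can be sketched in a few lines: instead of submitting a wildly deviating update, a Byzantine worker shifts the coordinate-wise mean of the gradients it can observe by a small multiple of their standard deviation, staying inside the natural spread of benign updates. A minimal sketch follows; the helper name `alie_malicious_update` and the fixed perturbation factor `z` are illustrative assumptions (the paper derives the largest safe `z` from the number of workers), not the authors' reference implementation.

```python
import numpy as np

def alie_malicious_update(attacker_grads, z=1.0):
    """Craft a malicious update in the spirit of the paper's attack:
    deviate from the coordinate-wise mean by z standard deviations,
    so the update stays within the spread of honest-looking gradients
    and distance-based defenses do not flag it.

    attacker_grads: array of shape (n_workers, dim) -- assumed to be
    gradients the attackers computed on their OWN data (non-omniscient:
    no access to other participants' data). `z` is a hypothetical
    fixed perturbation factor chosen here for illustration.
    """
    grads = np.asarray(attacker_grads, dtype=float)
    mu = grads.mean(axis=0)      # coordinate-wise mean
    sigma = grads.std(axis=0)    # coordinate-wise standard deviation
    return mu - z * sigma        # small but well-crafted deviation

# Toy demo: 10 corrupt workers, 4-dimensional gradients.
rng = np.random.default_rng(0)
grads = rng.normal(loc=1.0, scale=0.5, size=(10, 4))
bad = alie_malicious_update(grads, z=1.0)

# The malicious update deviates by at most one std on every coordinate.
assert np.all(np.abs(bad - grads.mean(axis=0)) <= grads.std(axis=0) + 1e-9)
```

All corrupt workers would submit this same crafted vector, biasing the aggregate each round while each individual update still looks statistically plausible.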

Moran Baruch, Gilad Baruch, Yoav Goldberg • 2019

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | CIFAR-10 IID | Accuracy: 81 | 166
Image Classification | CIFAR-10 non-IID | Accuracy: 63.6 | 162
Model Poisoning Attack | Purchase cross-device (test) | 26.14 | 74
Federated Learning Model Poisoning Robustness | Purchase cross-silo (100 FL clients, 500 global iterations, 3-layer DNN model) | Attack Impact (I_theta): 7.65 | 26
Image Classification | CIFAR-10 non-IID (test) | Average Test Accuracy: 48.5 | 14
