
Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing

About

In Byzantine robust distributed or federated learning, a central server wants to train a machine learning model over data distributed across multiple workers. However, a fraction of these workers may deviate from the prescribed algorithm and send arbitrary messages. While this problem has received significant attention recently, most current defenses assume that the workers have identical data. For realistic cases when the data across workers are heterogeneous (non-iid), we design new attacks which circumvent current defenses, leading to significant loss of performance. We then propose a simple bucketing scheme that adapts existing robust algorithms to heterogeneous datasets at a negligible computational cost. We also theoretically and experimentally validate our approach, showing that combining bucketing with existing robust algorithms is effective against challenging attacks. Our work is the first to establish guaranteed convergence for the non-iid Byzantine robust problem under realistic assumptions.
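
To make the bucketing idea from the abstract concrete, below is a minimal sketch (not the authors' reference implementation): worker gradients are randomly shuffled, averaged within small buckets, and the bucket means are then passed to an unchanged robust aggregator. The PyTorch dependency, the function names, the bucket size, and the choice of coordinate-wise median as the downstream aggregator are illustrative assumptions.

```python
import torch

def bucketing_aggregate(gradients, bucket_size, robust_agg):
    """Bucketing step applied before an existing robust aggregation rule.

    gradients:   list of 1-D tensors, one per worker (possibly non-iid, some Byzantine)
    bucket_size: number of worker gradients averaged into each bucket
    robust_agg:  any existing robust aggregator (e.g. coordinate-wise median, Krum)
    """
    n = len(gradients)
    # Randomly shuffle workers so each bucket mixes heterogeneous data.
    perm = torch.randperm(n).tolist()
    shuffled = [gradients[i] for i in perm]
    # Average the gradients within each bucket.
    bucket_means = [
        torch.stack(shuffled[i:i + bucket_size]).mean(dim=0)
        for i in range(0, n, bucket_size)
    ]
    # Feed the (less heterogeneous) bucket means to the unchanged robust aggregator.
    return robust_agg(bucket_means)

def coordinate_wise_median(vectors):
    # One possible robust aggregator: coordinate-wise median of the inputs.
    return torch.stack(vectors).median(dim=0).values

# Illustrative usage with simulated worker gradients.
grads = [torch.randn(10) for _ in range(25)]
update = bucketing_aggregate(grads, bucket_size=5, robust_agg=coordinate_wise_median)
```

The point of the sketch is that bucketing wraps around an existing defense rather than replacing it: only the shuffle-and-average step is added, which keeps the extra computational cost negligible.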

Sai Praneeth Karimireddy, Lie He, Martin Jaggi • 2020

Related benchmarks

Task | Dataset | Result | Rank
Federated Time Series Forecasting | FHWA | MSE 0.0067 | 45
Federated Time Series Forecasting | PDCCH | MSE 0.0063 | 45
Model Poisoning Defense | PDCCH | MSE 0.0065 | 36
Model Poisoning Defense | FHWA | MSE 0.2207 | 36
Byzantine Attack Defense | PDCCH (Full) | MSE 0.0056 | 9
Byzantine Attack Defense | PDCCH (Partial) | MSE 0.0071 | 9
Byzantine Attack Defense | FHWA (Full) | MSE 0.0252 | 9
Byzantine Attack Defense | FHWA (Partial) | MSE 0.0487 | 9
Distributed Optimization | Optimization under Byzantine and/or DP adversaries | Utility 2 | 6
