
Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection

About

Current defense mechanisms against model poisoning attacks in federated learning (FL) systems have proven effective only up to a certain threshold of malicious clients. In this work, we introduce FLANDERS, a novel pre-aggregation filter for FL that is resilient to large-scale model poisoning attacks, i.e., when malicious clients far outnumber legitimate participants. FLANDERS treats the sequence of local models sent by clients in each FL round as a matrix-valued time series. It then identifies malicious client updates as outliers in this time series by comparing the actual observations with estimates produced by a matrix autoregressive forecasting model maintained by the server. Experiments conducted in several non-IID FL setups show that FLANDERS significantly improves robustness across a wide spectrum of attacks when paired with existing standard and robust aggregation methods.

Edoardo Gabrielli, Dimitri Belli, Zoe Matrullo, Vittorio Miori, Gabriele Tolomei • 2023
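The core idea in the abstract — treat the per-round stack of client updates as a matrix-valued time series, forecast the next matrix with a matrix autoregressive (MAR) model, and flag clients whose observed updates deviate most from the forecast — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a MAR(1) model `X_t ≈ A X_{t-1} B` fit by alternating least squares, and all function names and the row-wise distance score are hypothetical simplifications.

```python
import numpy as np

def fit_mar1(history, n_iters=20):
    """Fit a MAR(1) model X_t ≈ A @ X_{t-1} @ B by alternating least squares.

    `history` is a list of (n_clients x n_params) matrices, one per FL round.
    """
    pairs = list(zip(history[:-1], history[1:]))
    n, p = history[0].shape
    A, B = np.eye(n), np.eye(p)
    for _ in range(n_iters):
        # Fix B, solve the least-squares problem for A:  X_t ≈ A @ (X_{t-1} @ B)
        Y = [Xp @ B for Xp, _ in pairs]
        A = sum(Xt @ Yt.T for (_, Xt), Yt in zip(pairs, Y)) \
            @ np.linalg.pinv(sum(Yt @ Yt.T for Yt in Y))
        # Fix A, solve the least-squares problem for B:  X_t ≈ (A @ X_{t-1}) @ B
        Z = [A @ Xp for Xp, _ in pairs]
        B = np.linalg.pinv(sum(Zt.T @ Zt for Zt in Z)) \
            @ sum(Zt.T @ Xt for Zt, (_, Xt) in zip(Z, pairs))
    return A, B

def anomaly_scores(history, X_now):
    """Score each client by the distance between its observed update (a row of
    X_now) and the corresponding row of the MAR forecast for this round."""
    A, B = fit_mar1(history)
    X_hat = A @ history[-1] @ B           # server's forecast for the current round
    return np.linalg.norm(X_now - X_hat, axis=1)
```

A pre-aggregation filter would then drop the top-k highest-scoring clients before averaging; a poisoned update that deviates sharply from the forecast stands out regardless of how many clients are malicious, since the forecast is driven by the temporal dynamics rather than by a majority vote.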

Related benchmarks

Task                         Dataset               Metric      Value   Rank
Image Classification         CIFAR-10              Accuracy    36      564
Image Classification         CIFAR-100             Accuracy    9       435
Image Classification         Fashion MNIST         Accuracy    69      300
Image Classification         FashionMNIST (test)   Accuracy    72      260
Image Classification         MNIST (test)          Accuracy    90      196
Malicious Client Detection   Fashion MNIST         --          --      24
Malicious Client Detection   MNIST                 Precision   100     16
Malicious Client Detection   CIFAR-10              Precision   100     16
Malicious Client Detection   CIFAR-100             Precision   100     16
Image Classification         MNIST                 Accuracy    86      14

(Showing 10 of 18 rows.)
