
NoT: Federated Unlearning via Weight Negation

About

Federated unlearning (FU) aims to remove a participant's data contributions from a trained federated learning (FL) model, ensuring privacy and regulatory compliance. Traditional FU methods often depend on auxiliary storage on either the client or server side, or require direct access to the data targeted for removal, a dependency that may not be feasible if the data is no longer available. To overcome these limitations, we propose NoT, a novel and efficient FU algorithm based on weight negation (multiplying by -1), which circumvents the need for additional storage and for access to the target data. We argue that effective and efficient unlearning can be achieved by perturbing model parameters away from the set of optimal parameters while remaining well-positioned for quick re-optimization. This technique, though seemingly contradictory, is theoretically grounded: we prove that the weight-negation perturbation effectively disrupts inter-layer co-adaptation, inducing unlearning while preserving an approximate optimality property, thereby enabling rapid recovery. Experimental results across three datasets and three model architectures demonstrate that NoT significantly outperforms existing baselines in unlearning efficacy as well as in communication and computational efficiency.
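The core operation described above, multiplying model weights by -1, can be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the authors' implementation: the parameter names and the two-layer shape are our own assumptions. It shows the two properties the abstract relies on: the sign flip changes every parameter (disrupting inter-layer co-adaptation), yet leaves each layer's norm unchanged, which is what keeps the model well-positioned for quick re-optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters of a small two-layer model (names and shapes
# are illustrative only; they do not come from the NoT paper).
params = {
    "layer1.weight": rng.normal(size=(16, 8)),
    "layer2.weight": rng.normal(size=(4, 16)),
}

def negate(params):
    """Weight-negation perturbation: multiply every weight by -1.

    Every parameter changes sign, so the learned co-adaptation between
    layers is disrupted, but each weight's magnitude is untouched.
    """
    return {name: -w for name, w in params.items()}

unlearned = negate(params)

# Layer norms are preserved exactly by the sign flip, so the perturbed
# model stays close (in norm) to the optimized one and can be quickly
# re-optimized on the remaining clients' data.
for name in params:
    assert np.isclose(np.linalg.norm(params[name]),
                      np.linalg.norm(unlearned[name]))
```

In an actual FL system the server would apply this negation to the global model once the unlearning request arrives, then resume ordinary federated training rounds with the remaining clients to recover utility.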

Yasser H. Khalil, Leo Brunswic, Soufiane Lamghari, Xu Li, Mahdi Beitollahi, Xi Chen • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-10 (test) | Accuracy: 92.18 | 3381 |
| Image Classification | FashionMNIST (test) | -- | 218 |
| Image Classification | Caltech-101 | Accuracy: 15.62 | 198 |
| Federated Unlearning | CIFAR-10 (test) | Pretrain Accuracy: 80 | 20 |
| Federated Unlearning | Fashion MNIST (test) | Pre-training Accuracy: 99.2 | 20 |
| Federated Unlearning | MNIST (test) | Pretrain Accuracy: 99.13 | 20 |
| Federated Unlearning | CIFAR-10 | Pretrain Accuracy: 68.7 | 20 |
| Federated Unlearning | Fashion MNIST | Pre-training Accuracy: 65.93 | 20 |
| Image Classification | CIFAR-10 (test) | Pretrain Accuracy: 60.44 | 20 |
| Image Classification | MNIST (test) | Pretrain Accuracy: 77.48 | 15 |

Showing 10 of 40 rows.
