Jellyfish: Zero-Shot Federated Unlearning Scheme with Knowledge Disentanglement

About

With the increasing importance of data privacy and security, federated unlearning emerges as a new research field dedicated to ensuring that once specific data is deleted, federated learning models no longer retain or disclose related information. In this paper, we propose a zero-shot federated unlearning scheme, named Jellyfish. It distinguishes itself from conventional federated unlearning frameworks in four key aspects: synthetic data generation, knowledge disentanglement, loss function design, and model repair. To preserve the privacy of forgotten data, we design a zero-shot unlearning mechanism that generates error-minimization noise as proxy data for the data to be forgotten. To maintain model utility, we first propose a knowledge disentanglement mechanism that regularises the output of the final convolutional layer by restricting the number of activated channels for the data to be forgotten and encouraging activation sparsity. Next, we construct a comprehensive loss function that incorporates multiple components, including hard loss, confusion loss, distillation loss, model weight drift loss, gradient harmonization, and gradient masking, to effectively align the learning trajectories of the objectives of "forgetting" and "retaining". Finally, we propose a zero-shot repair mechanism that leverages proxy data to restore model accuracy within acceptable bounds without accessing users' local data. To evaluate the performance of the proposed zero-shot federated unlearning scheme, we conducted comprehensive experiments across diverse settings. The results validate the effectiveness and robustness of the scheme.
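The error-minimization noise generation described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes a frozen linear-softmax model in place of the real network, and the function name `make_error_min_noise` and all hyperparameters (steps, learning rate) are invented for illustration. The idea is to optimize a noise vector so the frozen model assigns it minimal loss for the target (to-be-forgotten) class, yielding proxy data that stands in for the forgotten samples.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def make_error_min_noise(W, target_class, dim, steps=200, lr=0.5, seed=0):
    """Gradient-descend a noise vector x so that the frozen linear
    model W assigns minimal cross-entropy loss to `target_class`.
    Returns the optimized proxy input (error-minimization noise)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=0.1, size=dim)
    onehot = np.zeros(W.shape[0])
    onehot[target_class] = 1.0
    for _ in range(steps):
        p = softmax(W @ x)
        # gradient of cross-entropy w.r.t. the input for a frozen
        # linear model: d/dx [-log p_c] = W^T (p - onehot)
        grad = W.T @ (p - onehot)
        x -= lr * grad
    return x
```

In the actual scheme the frozen model would be the global federated model and the optimization would run over image-shaped tensors with autograd; the proxy noise can then be fed to the unlearning and repair stages in place of users' local data.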

Houzhe Wang, Xiaojie Zhu, Chi Chen • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Membership Inference Attack | CIFAR-10 | -- | 107 |
| Class Unlearning | CIFAR-10 (test) | Df: 0.15 | 35 |
| Class Unlearning | CIFAR-100 (test) | Acc (Dr): 78.45 | 22 |
| Category Unlearning | MNIST (test) | Accuracy (Dr): 97.85 | 22 |
| Membership Inference Attack | CIFAR-100 | -- | 18 |
| Machine Unlearning | CIFAR-100 | Unlearning Time (min): 1.4222 | 5 |
| Machine Unlearning | CIFAR-10 | Average Time per Epoch (s): 4.49 | 3 |
| Membership Inference Attack | CIFAR-100 (Dr) | MIA Success Rate: 89.61 | 3 |
| Machine Unlearning | MNIST | Average Time (s/epoch): 3.25 | 3 |
| Membership Inference Attack | MNIST (Dr) | MIA Success Rate: 79.4 | 3 |

Showing 10 of 11 rows.
