
Towards Privacy-Guaranteed Label Unlearning in Vertical Federated Learning: Few-Shot Forgetting without Disclosure

About

This paper addresses the critical challenge of unlearning in Vertical Federated Learning (VFL), a setting that has received far less attention than its horizontal counterpart. Specifically, we propose the first method tailored to label unlearning in VFL, where labels play a dual role as both essential inputs and sensitive information. To this end, we employ a representation-level manifold mixup mechanism to generate synthetic embeddings for both unlearned and retained samples, providing richer signals for the subsequent gradient-based forgetting and recovery steps. The augmented embeddings of the unlearned samples are then subjected to gradient-based label forgetting, effectively removing the associated label information from the model. To recover performance on the retained data, we introduce a recovery-phase optimization step that refines the remaining embeddings. This design achieves effective label unlearning while maintaining computational efficiency. We validate our method through extensive experiments on diverse datasets, including MNIST, CIFAR-10, CIFAR-100, ModelNet, Brain Tumor MRI, COVID-19 Radiography, and Yahoo Answers, demonstrating strong efficacy and scalability. Overall, this work establishes a new direction for unlearning in VFL, showing that re-imagining mixup as an efficient mechanism can unlock practical and utility-preserving unlearning. The code is publicly available at https://github.com/bryanhx/Towards-Privacy-Guaranteed-Label-Unlearning-in-Vertical-Federated-Learning

Hanlin Gu, Hong Xi Tae, Lixin Fan, Chee Seng Chan • 2024
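The three-stage pipeline described in the abstract (representation-level manifold mixup, gradient-ascent label forgetting, recovery optimization on retained data) can be illustrated on a toy linear head over fixed embeddings. This is a minimal sketch under assumed hyperparameters (mixup ratio, step counts, learning rate) and invented helper names; it is not the paper's implementation, which operates on real VFL party embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def manifold_mixup(emb, lam=0.6):
    """Mix each embedding with a random partner from the same set
    (representation-level mixup; labels are shared within each set)."""
    perm = rng.permutation(len(emb))
    return lam * emb + (1.0 - lam) * emb[perm]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_grad(W, emb, y):
    """Gradient of the mean cross-entropy loss w.r.t. the linear head W."""
    p = softmax(emb @ W)
    p[np.arange(len(y)), y] -= 1.0
    return emb.T @ p / len(y)

def accuracy(W, emb, y):
    return float((np.argmax(emb @ W, axis=1) == y).mean())

# Toy setup: a d-dim embedding space, a 2-class linear head, and two
# partitions of samples: those whose label must be forgotten (class 0)
# and those to retain (class 1).
d, n = 8, 16
W = rng.normal(scale=0.1, size=(d, 2))
emb_forget = rng.normal(loc=1.0, size=(n, d))
y_forget = np.zeros(n, dtype=int)
emb_retain = rng.normal(loc=-1.0, size=(n, d))
y_retain = np.ones(n, dtype=int)

# Stage 1: augment both partitions with synthetic mixed embeddings.
aug_forget = manifold_mixup(emb_forget)
aug_retain = manifold_mixup(emb_retain)

# Stage 2: label forgetting -- gradient *ascent* on the unlearned labels.
for _ in range(50):
    W += 0.5 * ce_grad(W, aug_forget, y_forget)
acc_forget_mid = accuracy(W, emb_forget, y_forget)  # should collapse

# Stage 3: recovery -- ordinary gradient descent on the retained data.
for _ in range(200):
    W -= 0.5 * ce_grad(W, aug_retain, y_retain)
acc_retain = accuracy(W, emb_retain, y_retain)  # should be restored

print(f"forget-set accuracy after forgetting: {acc_forget_mid:.2f}")
print(f"retain-set accuracy after recovery:  {acc_retain:.2f}")
```

The key design point the sketch mirrors is that forgetting and recovery act on the *augmented* embeddings, so the mixup step supplies extra gradient signal to both phases without touching raw features.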

Related benchmarks

Task                 | Dataset                    | Result                 | Rank
---------------------|----------------------------|------------------------|-----
Image Classification | CIFAR-100 (test)           | --                     | 3518
Image Classification | CIFAR-10 (test)            | --                     | 3381
Image Classification | Tiny ImageNet (test)       | Accuracy 57.84         | 265
Class Unlearning     | CIFAR-10 (test)            | Test Accuracy 67.45    | 21
Binary Classification| Income (test)              | Test Accuracy 79.36    | 20
Class Unlearning     | Tiny ImageNet (test)       | --                     | 19
Class Unlearning     | CIFAR-100 (test)           | --                     | 13
Label Unlearning     | MedMNIST PathMNIST (test)  | Test Accuracy 81.89    | 10
Image Classification | MedMNIST PathMNIST (test)  | Accuracy 83.85         | 10
Federated Unlearning | Income                     | Running Time (s) 0.79  | 5

(Showing 10 of 14 rows)
