
Contrastive Unlearning: A Contrastive Approach to Machine Unlearning

About

Machine unlearning aims to eliminate the influence of a subset of training samples (i.e., unlearning samples) from a trained model. Removing the unlearning samples effectively and efficiently, without degrading overall model performance, remains challenging. In this paper, we propose a contrastive unlearning framework that leverages representation learning for more effective unlearning. It removes the influence of unlearning samples by contrasting their embeddings against those of the remaining samples, so that unlearning samples are pushed away from their original classes and pulled toward other classes. By directly optimizing the representation space, it effectively removes the influence of the unlearning samples while preserving the representations learned from the remaining samples. Experiments across a variety of datasets and models, on both class unlearning and sample unlearning, show that contrastive unlearning achieves the best unlearning effectiveness and efficiency with the lowest performance loss compared with state-of-the-art algorithms.
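The core idea in the abstract — treating other-class samples as positives and same-class samples as negatives for each unlearning sample — can be sketched as an inverted supervised contrastive loss. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, temperature value, and the precise positive/negative assignment are assumptions for illustration.

```python
import numpy as np

def contrastive_unlearning_loss(z_u, z_r, y_u, y_r, tau=0.5):
    """Hypothetical sketch of a contrastive unlearning loss.

    For each unlearning embedding z_u[i], remaining-set samples of a
    *different* class act as positives (pulling the embedding toward other
    classes) and same-class remaining samples act as negatives (pushing it
    away from its original class) -- the reverse of the usual supervised
    contrastive assignment.

    z_u: (n_u, d) embeddings of unlearning samples
    z_r: (n_r, d) embeddings of remaining samples
    y_u, y_r: integer class labels for each set
    """
    # Cosine similarities between unlearning and remaining embeddings.
    zu = z_u / np.linalg.norm(z_u, axis=1, keepdims=True)
    zr = z_r / np.linalg.norm(z_r, axis=1, keepdims=True)
    sim = (zu @ zr.T) / tau                      # shape (n_u, n_r)

    losses = []
    for i in range(len(z_u)):
        logits = sim[i] - sim[i].max()           # numerical stability
        exp = np.exp(logits)
        positives = y_r != y_u[i]                # other-class samples
        # Average -log p(positive) over all positives (InfoNCE-style).
        losses.append(-np.mean(np.log(exp[positives] / exp.sum())))
    return float(np.mean(losses))
```

As a sanity check, the loss should be lower when an unlearning sample's embedding already sits near another class's embeddings than when it sits near its own original class, which is the direction gradient descent on this loss would move it.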

Hong kyu Lee, Qiuchen Zhang, Carl Yang, Jian Lou, Li Xiong • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Face Retrieval | CFP-FP | mAP | 91 | 11 |
| Face Retrieval | CelebA D_f (test) | mAP | 88.57 | 8 |
| Face Retrieval | VggFace2 | mAP | 89 | 8 |
| Face Retrieval | CelebA extended (test) | mAP | 88.23 | 8 |
| Face Retrieval | CFP-FP (test) | mAP | 0.7003 | 8 |
| Face Unlearning | CelebA forget set (test) | Accuracy | 98.17 | 8 |
| Face Unlearning | CelebA D_r retain set (test) | Accuracy | 96.6 | 8 |
