
Towards Adversarial Evaluations for Inexact Machine Unlearning

About

Machine Learning models face increasing concerns regarding the storage of personal user data and the adverse impact of corrupted data such as backdoors or systematic bias. Machine Unlearning can address these concerns by allowing post-hoc deletion of affected training data from a learned model. Achieving this task exactly is computationally expensive; consequently, recent works have proposed inexact unlearning algorithms that solve it approximately, along with evaluation methods to test the effectiveness of these algorithms. In this work, we first outline some necessary criteria for evaluation methods and show that no existing evaluation satisfies them all. We then design a stronger black-box evaluation method, the Interclass Confusion (IC) test, which adversarially manipulates data during training to detect the insufficiency of unlearning procedures. We also propose two analytically motivated baseline methods (EU-k and CF-k) that outperform several popular inexact unlearning methods. Overall, we demonstrate how adversarial evaluation strategies can help analyze various unlearning phenomena and guide the development of stronger unlearning algorithms.
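The IC test described above works by adversarially manipulating the training labels of a chosen forget set: the labels of two classes are swapped before training, so that after unlearning one can check whether the model still confuses those classes. The sketch below illustrates only this label-swapping setup step; the function name and return format are my own for illustration and are not taken from the paper.

```python
def build_ic_forget_set(labels, class_a, class_b):
    """Illustrative sketch of the Interclass Confusion (IC) setup step:
    swap the labels of two chosen classes so the forget set is
    adversarially mislabeled during training.

    Returns the manipulated label list and the indices of the forget
    set (all examples whose labels were swapped). After an unlearning
    procedure runs, residual confusion between class_a and class_b on
    held-out data signals that the forget set's influence remains.
    """
    swapped = []
    forget_indices = []
    for i, y in enumerate(labels):
        if y == class_a:
            swapped.append(class_b)   # mislabel class_a as class_b
            forget_indices.append(i)
        elif y == class_b:
            swapped.append(class_a)   # mislabel class_b as class_a
            forget_indices.append(i)
        else:
            swapped.append(y)         # all other classes untouched
    return swapped, forget_indices
```

For example, with CIFAR-10 one might swap two class labels across the chosen forget examples, train normally, apply the unlearning method under test, and then measure how often the unlearned model still maps inputs of one swapped class to the other.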

Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, Ponnurangam Kumaraguru • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
---- | ------- | ------ | ------ | ----
Machine Unlearning | CIFAR-10 bird, Class 2 (test) | Forgetting Accuracy (Class) | 27.2 | 48
Machine Unlearning | ImageNette gas pump, Class 7 (test) | Forget Accuracy | 0.72 | 48
Class Unlearning | CIFAR-10 | Retain Accuracy | 100 | 39
Class Unlearning | Small CIFAR-5 | Retention Accuracy | 99.96 | 13
Machine Unlearning | MIMIC-CXR (3% forget set) | MIA | 1 | 12
Machine Unlearning | MIMIC-CXR (6% forget set) | MIA | 100 | 12
Machine Unlearning | MIMIC-CXR (10% forget set) | MIA | 100 | 12
Image Classification | CIFAR-10 (test) | Retain Accuracy | 100 | 11
