Towards Unbounded Machine Unlearning
About
Deep machine unlearning is the problem of 'removing' from a trained neural network a subset of its training set. This problem is very timely and has many applications, including the key tasks of removing biases (RB), resolving confusion (RC) (caused by mislabelled data in trained models), as well as allowing users to exercise their 'right to be forgotten' to protect User Privacy (UP). This paper is the first, to our knowledge, to study unlearning for different applications (RB, RC, UP), with the view that each has its own desiderata, definitions for 'forgetting' and associated metrics for forget quality. For UP, we propose a novel adaptation of a strong Membership Inference Attack for unlearning. We also propose SCRUB, a novel unlearning algorithm, which is the only method that is consistently a top performer for forget quality across the different application-dependent metrics for RB, RC, and UP. At the same time, SCRUB is also consistently a top performer on metrics that measure model utility (i.e. accuracy on retained data and generalization), and is more efficient than previous work. The above are substantiated through a comprehensive empirical evaluation against previous state-of-the-art.
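To make the UP metric concrete, the idea behind a membership inference attack (MIA) for unlearning can be sketched as follows: an attacker tries to distinguish forget-set examples from truly unseen test examples using the unlearned model's per-example losses; an attack accuracy near 50% (chance) suggests the forget set leaves no detectable trace. This is an illustrative, simplified threshold attack, not the paper's exact attack; the function name and the assumption that per-example losses have already been computed are ours.

```python
import numpy as np

def mia_forget_score(forget_losses, test_losses):
    """Simple threshold-based membership inference attack (illustrative).

    forget_losses: unlearned model's per-example losses on the forget set
    test_losses:   its per-example losses on held-out test examples
    Returns the best achievable attack accuracy over all loss thresholds;
    a value near 0.5 indicates the attacker cannot tell forget-set members
    apart from non-members (good forgetting for the UP application).
    """
    losses = np.concatenate([forget_losses, test_losses])
    labels = np.concatenate([
        np.ones(len(forget_losses)),   # 1 = was in the training set
        np.zeros(len(test_losses)),    # 0 = never seen during training
    ])
    best = 0.5
    for t in np.unique(losses):
        # Predict "member" when the loss is at or below the threshold,
        # since training examples typically have lower loss.
        preds = (losses <= t).astype(float)
        acc = (preds == labels).mean()
        # The attacker may also invert the rule, so take the better side.
        best = max(best, acc, 1.0 - acc)
    return best
```

For example, if forget-set losses are clearly lower than test losses, the attack separates them perfectly (score 1.0); if the two loss distributions are identical, the score stays at 0.5.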
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Unlearning | ImageNette gas pump, Class 7 (test) | Forget Accuracy | 95.87 | 48 |
| Machine Unlearning | CIFAR-10 bird, Class 2 (test) | Forgetting Accuracy (Class) | 100 | 48 |
| Machine Unlearning | CIFAR-100 (test) | Forget Acc | 0.6364 | 43 |
| Class Unlearning | CIFAR-10 | Retain Accuracy | 99.93 | 39 |
| Selective Unlearning | Lacuna-10 (test) | Test Error (mean) | 1.67 | 36 |
| Resolving Confusion | CIFAR-10 | Test Error | 15.92 | 28 |
| Single-class Unlearning | CIFAR-100 | ACCr | 76.92 | 28 |
| Single-class Unlearning | MNIST | Accuracy Retention (ACCr) | 0.9945 | 28 |
| Resolving Confusion | Lacuna-5 (test) | Test Error | 3.87 | 27 |
| Class Unlearning | Lacuna-10 | Test Error | 1.96 | 27 |