Designing to Forget: Deep Semi-parametric Models for Unlearning
About
Recent advances in machine unlearning have focused on developing algorithms to remove specific training samples from a trained model. In contrast, we observe that not all models are equally easy to unlearn. Hence, we introduce a family of deep semi-parametric models (SPMs) that exhibit non-parametric behavior during unlearning. SPMs use a fusion module that aggregates information from each training sample, enabling explicit test-time deletion of selected samples without altering model parameters. Empirically, we demonstrate that SPMs achieve task performance competitive with parametric models in image classification and generation, while being significantly more efficient to unlearn. Notably, on ImageNet classification, SPMs reduce the prediction gap relative to a retrained (oracle) baseline by $11\%$ and achieve over $10\times$ faster unlearning compared to existing approaches on parametric models. The code is available at https://github.com/amberyzheng/spm_unlearning.
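To make the idea concrete, here is a minimal sketch of a semi-parametric classifier in the spirit described above: a fixed encoder paired with a non-parametric memory holding one entry per training sample, fused at prediction time by a softmax-weighted vote. This is a hypothetical illustration, not the paper's actual architecture; the class name, the identity-style encoder, and the similarity-based fusion rule are all assumptions made for the sketch. The key property it demonstrates is that unlearning a sample reduces to deleting its row from memory, with no parameter updates.

```python
import numpy as np

class SemiParametricClassifier:
    """Hypothetical sketch of a semi-parametric model (not the paper's code):
    a fixed encoder plus a per-sample memory. A fusion step aggregates label
    votes from all stored training samples, so forgetting a sample is an
    explicit test-time deletion of its memory row."""

    def __init__(self, encoder, temperature=0.1):
        self.encoder = encoder      # any feature extractor: x -> 1-D vector
        self.temperature = temperature
        self.keys = None            # stored embeddings, one row per training sample
        self.labels = None          # one-hot labels aligned with the rows of `keys`
        self.ids = None             # sample ids, used to locate rows for deletion

    def fit(self, X, y, ids, num_classes):
        # Build the non-parametric memory: one (embedding, label, id) per sample.
        self.keys = np.stack([self.encoder(x) for x in X])
        self.labels = np.eye(num_classes)[np.asarray(y)]
        self.ids = np.asarray(ids)

    def predict_proba(self, x):
        # Fusion module: softmax-weighted vote over all stored samples.
        q = self.encoder(x)
        sims = self.keys @ q / self.temperature
        w = np.exp(sims - sims.max())
        w /= w.sum()
        return w @ self.labels

    def unlearn(self, forget_ids):
        # Test-time deletion: drop the rows of the forgotten samples.
        # No parameters change and no retraining is needed.
        keep = ~np.isin(self.ids, forget_ids)
        self.keys = self.keys[keep]
        self.labels = self.labels[keep]
        self.ids = self.ids[keep]
```

Because the forgotten samples' only footprint is their memory rows, deleting those rows matches a model retrained without them, which is why this family of models can be far cheaper to unlearn than a purely parametric network.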
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Class-wise Unlearning | CIFAR-10 Unlearn 1 Class v1 (10% unlearned) | PG (H) 8.62 | 13 |
| Class-wise Unlearning | CIFAR-10 Unlearn 5 Classes v1 (50% unlearned) | PG (H) 21.2 | 13 |
| Machine Unlearning | ImageNet 1-class unlearning 1K | PG (H) 18.99 | 13 |
| Machine Unlearning | CIFAR-10 10% random unlearning | PG (H) 5.54 | 12 |
| Machine Unlearning | CIFAR-10 50% random unlearning | PG (H) 7.83 | 12 |
| Image Classification | ImageNet 1k (test) | Accuracy 67.1 | 10 |
| Image Classification | CIFAR-10 (test) | Accuracy 94.5 | 10 |
| Image Generation | CIFAR-10 | FID 7.04 | 7 |
| Class-wise Forgetting on Generation | CIFAR-10 1.0 (test) | Automobile FID (O) 1.97 | 6 |
| Class Unlearning | CIFAR-10 Automobile | -- | 5 |