Efficient Unlearning through Maximizing Relearning Convergence Delay
About
Machine unlearning addresses the challenge of removing mislabeled, contaminated, or otherwise problematic data from a pretrained model. Existing unlearning approaches and evaluation metrics focus solely on model predictions, limiting insight into how the model actually encodes the data it is meant to forget. To address this, we introduce a new metric, relearning convergence delay, which captures changes in both weight space and prediction space, providing a more comprehensive assessment of the model's knowledge of the forgotten dataset; it can also be used to assess the risk of the forgotten data being recovered from the unlearned model. Building on this metric, we propose the Influence Eliminating Unlearning framework, which removes the influence of the forgetting set by degrading the model's performance on it and by applying weight decay and injecting noise into the model's weights, while maintaining accuracy on the retaining set. Extensive experiments show that our method outperforms existing approaches on both established metrics and our proposed relearning convergence delay, approaching ideal unlearning performance. We provide theoretical guarantees, including exponential convergence and upper bounds, as well as empirical evidence of strong retention and resistance to relearning in both classification and generative unlearning tasks.
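The abstract describes an update that combines degrading forget-set performance with weight decay, noise injection, and retain-set preservation. A minimal sketch of one such step for a linear model is below; the exact update rule, function name, and hyperparameters are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def ieu_step(w, X_f, y_f, X_r, y_r, lr=0.05, lam=0.01, sigma=0.01, rng=None):
    """One hypothetical Influence-Eliminating-style unlearning step
    (an assumption for illustration, not the paper's method):
      - gradient *ascent* on the forgetting-set loss (degrade performance),
      - gradient descent on the retaining-set loss (preserve utility),
      - weight decay, and Gaussian noise injected into the weights."""
    rng = np.random.default_rng(0) if rng is None else rng
    grad_f = X_f.T @ (X_f @ w - y_f) / len(y_f)   # MSE gradient on forget set
    grad_r = X_r.T @ (X_r @ w - y_r) / len(y_r)   # MSE gradient on retain set
    w = w + lr * grad_f                            # ascend forget-set loss
    w = w - lr * grad_r                            # descend retain-set loss
    w = (1.0 - lam) * w                            # weight decay
    w = w + sigma * rng.normal(size=w.shape)       # noise injection
    return w
```

Iterating this step trades off erasing the forgetting set's influence against retaining-set accuracy; the relearning convergence delay would then be measured by fine-tuning the resulting weights on the forget set and counting epochs to reconvergence.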
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Unlearning | Tiny-ImageNet Forget 50% | RMIA AUC | 95.5 | 26 |
| Machine Unlearning | CIFAR-10 30% random data forgetting | Average Gap | 0.003 | 24 |
| Machine Unlearning | CIFAR-100 Random Forget 50% | MIA | 72.3 | 19 |
| Machine Unlearning | Tiny-Imagenet Random Forget 50%, γ=0 (test) | MIA | 95 | 19 |
| Image Classification Unlearning | Tiny ImageNet 30% class-wise forgetting | Accuracy (Train, Retained) | 89 | 16 |
| Machine Unlearning | CIFAR-100 30% class-wise data forgetting (train/test) | Utility (Accuracy, Train, Retained Data) | 99.8 | 16 |
| Machine Unlearning | CIFAR-100 50% class-wise data forgetting (train/test) | Accuracy Dr (Train) | 99.9 | 16 |
| Image Classification | TinyImageNet Remaining D_train_r (train) | Accuracy | 92.4 | 16 |
| Image Classification | TinyImageNet Forgotten (train) | Accuracy | 54.7 | 16 |
| Image Classification | TinyImageNet D (test) | Accuracy | 54.1 | 16 |