
IMU: Influence-guided Machine Unlearning

About

Machine Unlearning (MU) aims to selectively erase the influence of specific data points from pretrained models. However, most existing MU methods rely on the retain set to preserve model utility, which is often impractical due to privacy restrictions and storage constraints. While several retain-data-free methods attempt to bypass this using geometric feature shifts or auxiliary statistics, they typically treat forgetting samples uniformly, overlooking their heterogeneous contributions. To address this, we propose Influence-guided Machine Unlearning (IMU), a principled method that conducts MU using only the forget set. Departing from uniform Gradient Ascent (GA) or implicit weighting mechanisms, IMU leverages influence functions as an explicit priority signal to allocate unlearning strength. To circumvent the prohibitive cost of full-model Hessian inversion, we introduce a theoretically grounded classifier-level influence approximation. This efficient design allows IMU to dynamically reweight unlearning updates, aggressively targeting samples that most strongly support the forgetting objective while minimizing unnecessary perturbation to retained knowledge. Extensive experiments across vision and language tasks show that IMU achieves highly competitive results. Compared to standard uniform GA, IMU maintains identical unlearning depth while enhancing model utility by an average of 30%, effectively overcoming the inherent utility-forgetting trade-off.
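The core mechanism the abstract describes — per-sample influence scores computed at the classifier level, used as a priority signal to reweight gradient-ascent updates on the forget set — can be sketched on a toy logistic-regression "classifier head". This is a minimal illustrative approximation, not the paper's implementation: the self-influence score g_i^T H^{-1} g_i, the Hessian damping term, and all hyperparameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def bce(w, X, y):
    # Binary cross-entropy loss of the classifier head.
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad(w, x, y):
    # Per-sample gradient of the loss w.r.t. the classifier weights.
    return (sigmoid(x @ w) - y) * x

def hessian(w, X, damping=1e-3):
    # Classifier-level Hessian (damped so it is always invertible);
    # inverting only this small matrix avoids full-model Hessian inversion.
    p = sigmoid(X @ w)
    s = p * (1 - p)
    return (X.T * s) @ X / len(X) + damping * np.eye(X.shape[1])

def influence_weights(w, X_f, y_f):
    # Assumed priority signal: self-influence g_i^T H^{-1} g_i per forget
    # sample, normalized into a distribution over the forget set.
    H_inv = np.linalg.inv(hessian(w, X_f))
    scores = np.array([grad(w, x, t) @ H_inv @ grad(w, x, t)
                       for x, t in zip(X_f, y_f)])
    return scores / scores.sum()

def imu_step(w, X_f, y_f, lr=0.5):
    # Influence-weighted gradient ASCENT: high-influence forget samples
    # drive the update instead of being weighted uniformly.
    wts = influence_weights(w, X_f, y_f)
    g = np.sum([a * grad(w, x, t) for a, x, t in zip(wts, X_f, y_f)], axis=0)
    return w + lr * g

# Demo: train a head, then unlearn a forget subset with no retain data.
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(300):
    w -= 0.5 * np.mean([grad(w, x, t) for x, t in zip(X, y)], axis=0)

X_f, y_f = X[:20], y[:20]            # forget set only
loss_before = bce(w, X_f, y_f)
for _ in range(10):
    w = imu_step(w, X_f, y_f)
loss_after = bce(w, X_f, y_f)
print(f"forget-set loss: {loss_before:.3f} -> {loss_after:.3f}")
```

Ascent raises the loss on the forget samples, and the normalized influence weights concentrate that pressure on the samples the classifier-level approximation marks as most influential, rather than spreading it uniformly as plain GA would.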

Xindi Fan, Jing Wu, Mingyi Zhou, Pengwei Liang, Mehrtash Harandi, Dinh Phung · 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Sample-wise unlearning | CIFAR-10 10% sample-wise unlearning | Acc (Df) | 98.64 | 9 |
| Sample-wise unlearning | CIFAR-100 10% sample-wise unlearning | Accuracy (Deleted Samples) | 93.55 | 9 |
| Machine Unlearning | CIFAR-100 superclass-wise | Accuracy (Df) | 0.00 | 9 |
| Machine Unlearning | CIFAR-100 subclass-wise | Accuracy (Deleted Data) | 0.00 | 9 |
| Machine Unlearning | CIFAR-10 50% sample-wise unlearning | Accuracy (Df) | 72.94 | 9 |
| Person Re-Identification | Market-1501 (query-gallery) | mAP | 0.5585 | 8 |
| Machine Unlearning | CIFAR-10 class-wise | Accuracy Difference (Df) | 0.18 | 8 |
| Sequence Modeling | TOFU (Forget05) | Reconstruction Loss (l_r) | 3.86 | 5 |
| LLM Unlearning | TOFU (Forget05) | Model Utility | 0.33 | 4 |
