
Practical Continual Forgetting for Pre-trained Vision Models

About

Driven by privacy and security concerns, the need to erase unwanted information from pre-trained vision models is becoming evident. In real-world scenarios, erasure requests arise at any time from both users and model owners, and these requests usually form a sequence. Under such a setting, selected information is expected to be continuously removed from a pre-trained model while the rest is preserved. We define this problem as continual forgetting and identify three key challenges. (i) For unwanted knowledge, deletion must be efficient and effective. (ii) For remaining knowledge, the impact of the forgetting procedure should be minimal. (iii) In real-world scenarios, training samples may be scarce or partially missing during forgetting. To address these challenges, we first propose Group Sparse LoRA (GS-LoRA). Specifically, towards (i), we introduce Low-Rank Adaptation (LoRA) modules to fine-tune the Feed-Forward Network (FFN) layers in Transformer blocks for each forgetting task independently, and towards (ii), a simple group sparse regularization is adopted, enabling automatic selection of specific LoRA groups while zeroing out the others. To further extend GS-LoRA to more practical scenarios, we incorporate prototype information as additional supervision and introduce a more practical approach, GS-LoRA++. For each forgotten class, we move the logits away from its original prototype; for the remaining classes, we pull the logits closer to their respective prototypes. We conduct extensive experiments on face recognition, object detection, and image classification, demonstrating that our method forgets specific classes with minimal impact on the others. Code has been released at https://github.com/bjzhb666/GS-LoRA.
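The two core ingredients of the method can be sketched in a few lines: a LoRA residual on a frozen FFN linear layer, a group-lasso penalty that drives entire LoRA groups to zero (so only a few layers are actually edited), and a prototype push/pull loss in the spirit of GS-LoRA++. This is a minimal illustration, not the released implementation; the module names, the margin hyperparameter, and the exact form of the prototype loss are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) residual B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # freeze the pre-trained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init

    def forward(self, x):
        # Residual is zero at initialization, so behavior starts unchanged.
        return self.base(x) + F.linear(F.linear(x, self.A), self.B)


def group_sparse_penalty(lora_layers):
    """Group lasso over LoRA modules: sum of per-group norms.

    Treating each layer's (A, B) pair as one group encourages whole groups
    to collapse to zero, automatically selecting which layers to modify.
    """
    return sum(
        torch.sqrt((m.A ** 2).sum() + (m.B ** 2).sum()) for m in lora_layers
    )


def prototype_loss(logits, labels, prototypes, forget_mask, margin=1.0):
    """Hypothetical prototype supervision: push forgotten classes' logits
    away from their stored prototypes, pull remaining classes' logits closer.
    """
    target = prototypes[labels]                  # prototype for each sample
    dist = (logits - target).pow(2).sum(dim=1)   # squared distance per sample
    zero = logits.sum() * 0                      # graph-connected zero fallback
    pull = dist[~forget_mask].mean() if (~forget_mask).any() else zero
    push = F.relu(margin - dist[forget_mask]).mean() if forget_mask.any() else zero
    return pull + push
```

In training, one would optimize the task loss plus `alpha * group_sparse_penalty(...)` over only the LoRA parameters, leaving the backbone untouched; which layers remain non-zero after optimization is then the automatic group selection described above.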

Hongbo Zhao, Fei Zhu, Bolin Ni, Feng Zhu, Gaofeng Meng, Zhaoxiang Zhang • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Image Classification | CUB-200 Task 2 (80-20 classes) | Accuracy (Overall) | 68.79 | 16
Image Classification | CUB-200 Task 1 (100-20 classes) | Accuracy | 66.99 | 16
Image Classification | CUB-200 Task 3 (60-20 classes) | Accuracy (Accr) | 68.76 | 16
Image Classification | CUB-200 Task 4 (40-20 classes) | Accr | 70.23 | 16
Image Classification | OmniBenchmark Task 1 (100-20) | Accuracy (r) | 58.89 | 16
Image Classification | ImageNet100 Task 1 (100-20) 1.0 (test) | Accuracy | 85.98 | 16
Face Recognition | MS-Celeb-100 Task 4 (40-20 classes) | Acc_r | 99.29 | 16
Image Classification | ImageNet100 Task 4 (40-20) 1.0 (test) | Accuracy | 87.5 | 16
Face Recognition | MS-Celeb-100 Task 2 (80-20 classes) | Acc_r | 99.45 | 16
Image Classification | OmniBenchmark Task 2 (80-20) | Accuracy (r) | 55.53 | 16

Showing 10 of 20 rows.
