
Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning

About

Continual Learning (CL) aims to enable models to continuously acquire new knowledge from a sequence of tasks while avoiding the forgetting of previously learned information. However, existing CL methods rely only on the parameters of the most recent task for inference, which makes them susceptible to catastrophic forgetting. Inspired by the recent success of model merging techniques, we propose Perturb-and-Merge (P&M), a novel continual learning framework that integrates model merging into the CL paradigm to mitigate forgetting. Specifically, after training on each task, P&M constructs a new model by forming a convex combination of the previous model and the newly trained task-specific model. Through theoretical analysis, we minimize the total loss increase across all tasks and derive a closed-form solution for the merging coefficient under mild assumptions. To further improve the performance of the merged model, we observe that the degradation introduced during merging can be alleviated by a regularization term composed of the task vector and the Hessian matrix of the loss function. Interestingly, we show that this term can be efficiently approximated using second-order symmetric finite differences, and we accordingly devise a stochastic perturbation strategy along the task-vector direction that incurs no additional forward or backward passes while providing an effective approximation of the regularization term. Finally, we combine P&M with LoRA, a parameter-efficient fine-tuning method, to reduce memory overhead. Our proposed approach achieves state-of-the-art performance on several continual learning benchmark datasets. The code is available at https://github.com/qhmiao/P-M-for-Continual-Learning.
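The two stages described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the closed-form merging coefficient is replaced by a free parameter `alpha`, and the stochastic perturbation is modeled as a random-sign step along the task vector before the gradient is evaluated (the function names and signatures here are assumptions for illustration).

```python
import numpy as np

def merge_models(theta_prev, theta_task, alpha):
    """Inference-time merge: convex combination of the previous merged model
    and the newly trained task-specific model. The paper derives a closed-form
    alpha; here it is simply a parameter in [0, 1]."""
    return alpha * theta_prev + (1.0 - alpha) * theta_task

def perturbed_sgd_step(theta, grad_fn, task_vector, sigma=0.01, lr=0.1):
    """Training-time perturbation: evaluate the gradient at theta plus a
    random-sign perturbation along the task-vector direction. Averaged over
    the +/- signs, this mimics a symmetric finite-difference approximation
    of the Hessian-based regularizer at no extra forward/backward cost."""
    eps = sigma * np.random.choice([-1.0, 1.0]) * task_vector
    return theta - lr * grad_fn(theta + eps)
```

With `sigma=0` the step reduces to plain SGD, which makes the perturbation an easily ablated add-on.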

Haomiao Qiu, Miao Zhang, Ziyue Qiao, Liqiang Nie• 2025

Related benchmarks

| Task | Dataset | Result (Metric) | Rank |
|---|---|---|---|
| Class-incremental learning | CUB200 10 Tasks | 78.29 (FN, Final Acc) | 59 |
| Continual Learning | ImageNet-R 10 tasks | 85.29 (Average ACC@10) | 28 |
| Continual Learning | CIFAR100 10-task sequential (test) | 92.89 (Accuracy) | 26 |
| Continual Learning | ImageNet-R (20 tasks) | 82.77 (Average Accuracy, 20 Tasks) | 22 |
| Continual Learning | ImageNetR 5S | 81.47 (Accuracy Last, ALast) | 13 |
| Continual Learning | DomainNet 10S (10 sessions) | 84.35 (Accuracy Last) | 12 |
| Continual Learning | ImageNet-A 10 incremental tasks | 56.57 (Accuracy, A_Last) | 12 |
| Continual Learning | ImageNetA (20 sessions) | 52.27 (ALast Score) | 11 |
