
Exploring the Impact of Parameter Update Magnitude on Forgetting and Generalization of Continual Learning

About

The magnitude of parameter updates is considered a key factor in continual learning. However, most existing studies focus on designing diverse update strategies, and a theoretical understanding of the underlying mechanisms remains limited. We therefore characterize a model's forgetting from the perspective of parameter update magnitude, formalizing it as knowledge degradation induced by task-specific drift in the parameter space, an effect not fully captured in previous studies because they assume a unified parameter space. By deriving the optimal parameter update magnitude that minimizes forgetting, we unify two representative update paradigms, frozen training and initialized training, within an optimization framework for constrained parameter updates. Our theoretical results further reveal that task sequences with small parameter distances generalize better and forget less under frozen training than under initialized training. These theoretical insights inspire a novel hybrid parameter update strategy that adaptively adjusts the update magnitude based on gradient directions. Experiments on deep neural networks demonstrate that this hybrid approach outperforms standard training strategies, providing new theoretical perspectives and practical inspiration for designing efficient and scalable continual learning algorithms.
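The abstract describes a hybrid strategy that adaptively scales the update magnitude based on gradient directions, interpolating between frozen training (small updates) and initialized training (full updates). The paper's exact rule is not given here; the following is a minimal sketch of one plausible instantiation, where the step size shrinks when the new-task gradient opposes the drift away from the previous task's parameters. All names (`hybrid_update`, `shrink`) and the cosine-based rule itself are illustrative assumptions, not the authors' method.

```python
import numpy as np

def hybrid_update(theta, grad, theta_prev, lr=0.1, shrink=0.1):
    """Hypothetical hybrid update rule (illustrative only).

    Scales the gradient step by the alignment between the new-task
    gradient and the parameter drift (theta - theta_prev):
      - aligned gradient  -> near-full step (initialized-style update)
      - opposing gradient -> shrunken step  (frozen-style update)
    """
    drift = theta - theta_prev
    denom = np.linalg.norm(grad) * np.linalg.norm(drift)
    cos = float(grad @ drift / denom) if denom > 0 else 0.0
    # Interpolate the step scale between `shrink` and 1 using the
    # (clipped) cosine similarity as the mixing coefficient.
    scale = shrink + (1.0 - shrink) * max(cos, 0.0)
    return theta - lr * scale * grad
```

Under this toy rule, a gradient pointing along the task-specific drift takes a nearly unconstrained step, while a conflicting gradient is damped toward a frozen-style update.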

JinLi He, Liang Bai, Xian Yang • 2026

Related benchmarks

Task                 Dataset                      Metric                      Result   Rank
Continual Learning   CIFAR100 Split               Average Per-Task Accuracy   47.19    85
Continual Learning   Split CIFAR-100 20 tasks     Mean Test Accuracy          80.81    26
Continual Learning   Split CUB-200 20 Tasks       Task Accuracy               72.59    9
Continual Learning   CUB-200 Split                Final Avg Accuracy          39.01    8
Continual Learning   CIFAR-10 Split               Average Accuracy            80.55    6
Continual Learning   CIFAR-100 Correlated Split   Avg. Accuracy               50.21    6
Continual Learning   CUB-200 Corrupted Split      Avg Accuracy                41.26    6
Continual Learning   MNIST Split Permuted         Avg. Accuracy               77.99    6
