
Resolving Conflicts in Lifelong Learning via Aligning Updates in Subspaces

About

Low-Rank Adaptation (LoRA) enables efficient Continual Learning but often suffers from catastrophic forgetting due to destructive interference between tasks. Our analysis reveals that this degradation is primarily driven by antagonistic directional updates where new task gradients directly oppose the historical weight trajectory. To address this, we propose PS-LoRA (Parameter Stability LoRA), a framework designed to resolve conflicts by aligning updates within the optimization subspace. Our approach employs a dual-regularization objective that penalizes conflicting directions and constrains magnitude deviations to ensure consistency with prior knowledge. Additionally, we implement a magnitude-based merging strategy to consolidate sequential adapters into a robust representation without retraining. Experiments on NLP and Vision benchmarks show that PS-LoRA outperforms state-of-the-art methods by preserving the stability of learned representations while efficiently adapting to new domains.
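The paper itself does not publish its loss formulas here, but the two mechanisms described above (a dual-regularization penalty on conflicting directions and magnitude deviations, plus magnitude-based adapter merging) can be sketched roughly as follows. All function names, hyperparameters (`lam_dir`, `lam_mag`), and the exact forms of the penalties are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ps_lora_penalty(delta_w, hist_w, lam_dir=1.0, lam_mag=0.1):
    """Hypothetical dual-regularization penalty (sketch, not the paper's loss).

    delta_w: the current task's low-rank update (e.g. B @ A)
    hist_w : the accumulated historical weight update from prior tasks
    """
    d = delta_w.flatten()
    h = hist_w.flatten()
    # Directional term: penalize updates that oppose the historical
    # trajectory (only negative cosine similarity is penalized).
    cos = F.cosine_similarity(d, h, dim=0)
    dir_penalty = torch.relu(-cos)
    # Magnitude term: discourage large deviations from the historical
    # update's scale.
    mag_penalty = (d.norm() - h.norm()).pow(2)
    return lam_dir * dir_penalty + lam_mag * mag_penalty

def magnitude_merge(adapters):
    """Hypothetical magnitude-based merge: weight each adapter's update
    by its Frobenius norm, then combine without retraining."""
    weights = torch.tensor([a.norm() for a in adapters])
    weights = weights / weights.sum()
    return sum(w * a for w, a in zip(weights, adapters))
```

As a sanity check on the intuition: an update perfectly aligned with the historical trajectory and matching its norm incurs zero penalty, while an antagonistic (opposing) update is penalized.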

Yueer Zhou, Yichen Wu, Ying Wei • 2025

Related benchmarks

Task                         Dataset                                      Result                       Rank
Continual Learning           Standard CL Benchmark                        Avg Final Acc: 0.808         50
Continual Learning           TRACE                                        Avg Performance: 54.95       37
Class-incremental learning   ImageNet-R 20-task                           Average Accuracy: 75.35      33
Class-incremental learning   ImageNet-R 5-task                            --                           27
Continual Learning           Long CL benchmark N=15                       Long1 Performance: 76.7      18
Class-incremental learning   ImageNet-R N=10                              Accuracy: 77.15              6
Continual Learning           Standard Continual Learning Benchmark N=4    Forward Retention (FR): 1.99 6
Continual Learning           Long Continual Learning Benchmark N=15       Forward Retention: 6.32      6
