Resolving Conflicts in Lifelong Learning via Aligning Updates in Subspaces
About
Low-Rank Adaptation (LoRA) enables efficient continual learning but often suffers from catastrophic forgetting due to destructive interference between tasks. Our analysis reveals that this degradation is primarily driven by antagonistic directional updates, where new-task gradients directly oppose the historical weight trajectory. To address this, we propose PS-LoRA (Parameter Stability LoRA), a framework that resolves conflicts by aligning updates within the optimization subspace. Our approach employs a dual-regularization objective that penalizes conflicting update directions and constrains magnitude deviations to keep new updates consistent with prior knowledge. Additionally, we implement a magnitude-based merging strategy that consolidates sequential adapters into a single robust representation without retraining. Experiments on NLP and vision benchmarks show that PS-LoRA outperforms state-of-the-art methods, preserving the stability of learned representations while efficiently adapting to new domains.
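The two components described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`ps_lora_penalty`, `magnitude_merge`), the coefficients `lam_dir`/`lam_mag`, and the exact form of each term are assumptions for illustration; here the directional term penalizes negative cosine alignment between the new update and the historical weight trajectory, the magnitude term penalizes norm deviation, and merging weights each adapter by its update magnitude.

```python
import numpy as np

def ps_lora_penalty(delta_w, hist_w, lam_dir=1.0, lam_mag=0.1):
    """Hypothetical sketch of a dual-regularization objective.

    delta_w: flattened low-rank update for the current task.
    hist_w:  flattened historical weight trajectory (accumulated prior updates).
    """
    eps = 1e-8
    # Directional term: penalize only antagonistic updates, i.e. components
    # whose cosine similarity with the historical trajectory is negative.
    cos = np.dot(delta_w, hist_w) / (
        np.linalg.norm(delta_w) * np.linalg.norm(hist_w) + eps
    )
    dir_penalty = max(0.0, -cos)
    # Magnitude term: constrain how far the update norm drifts from the
    # historical norm, keeping magnitudes consistent with prior knowledge.
    mag_penalty = (np.linalg.norm(delta_w) - np.linalg.norm(hist_w)) ** 2
    return lam_dir * dir_penalty + lam_mag * mag_penalty

def magnitude_merge(adapters):
    """Hypothetical magnitude-based merge of sequential adapters (no retraining):
    each adapter contributes in proportion to its update norm."""
    norms = np.array([np.linalg.norm(a) for a in adapters])
    weights = norms / (norms.sum() + 1e-8)
    return sum(w * a for w, a in zip(weights, adapters))
```

In this sketch a perfectly aligned update with matching norm incurs zero penalty, while an update that directly opposes the trajectory is penalized, which is the conflict case the abstract targets.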
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Continual Learning | Standard CL Benchmark | Avg Final Acc | 0.808 | 50 |
| Continual Learning | TRACE | Avg Performance | 54.95 | 37 |
| Class-incremental learning | ImageNet-R 20-task | Average Accuracy | 75.35 | 33 |
| Class-incremental learning | ImageNet-R 5-task | -- | -- | 27 |
| Continual Learning | Long CL benchmark N=15 | Long1 Performance | 76.7 | 18 |
| Class-incremental learning | ImageNet-R N=10 | Accuracy | 77.15 | 6 |
| Continual Learning | Standard Continual Learning Benchmark N=4 | Forward Retention (FR) | 1.99 | 6 |
| Continual Learning | Long Continual Learning Benchmark N=15 | Forward Retention | 6.32 | 6 |