
Continual Gradient Low-Rank Projection Fine-Tuning for LLMs

About

Continual fine-tuning of Large Language Models (LLMs) is hampered by the trade-off between efficiency and expressiveness. Low-Rank Adaptation (LoRA) offers efficiency but constrains the model's ability to learn new tasks and transfer knowledge due to its low-rank nature and reliance on explicit parameter constraints. We propose GORP (Gradient LOw Rank Projection) for Continual Learning, a novel training strategy that overcomes these limitations by synergistically combining full and low-rank parameters and jointly updating them within a unified low-rank gradient subspace. GORP expands the optimization space while preserving efficiency and mitigating catastrophic forgetting. Extensive experiments on continual learning benchmarks demonstrate GORP's superior performance compared to existing state-of-the-art approaches. Code is available at https://github.com/Wcxwcxw/GORP.
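The core idea of updating parameters within a low-rank gradient subspace can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it assumes a GaLore-style projection where the subspace basis comes from the top-r left singular vectors of the gradient matrix, the optimizer step happens in the compact subspace, and the update is projected back to full parameter space.

```python
import numpy as np

def project_gradient(grad: np.ndarray, rank: int):
    """Project a full gradient matrix into a rank-r subspace.

    Hypothetical sketch of gradient low-rank projection: the basis P is
    the top-r left singular vectors of `grad`, so P.T @ grad is the
    compact gradient the optimizer actually sees.
    """
    U, _, _ = np.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]               # (m, r) orthonormal projection basis
    low_rank_grad = P.T @ grad    # (r, n) gradient in the subspace
    return P, low_rank_grad

def project_back(P: np.ndarray, low_rank_update: np.ndarray) -> np.ndarray:
    """Map a subspace update back to the full (m, n) parameter shape."""
    return P @ low_rank_update

# Toy usage: an 8x6 "gradient" reduced to a rank-2 subspace and back.
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 6))
P, g_low = project_gradient(G, rank=2)
update = project_back(P, g_low)   # rank-2 reconstruction of G
```

In a real training loop the optimizer state (e.g. Adam moments) lives in the small `(r, n)` subspace, which is where the memory savings come from; how GORP couples this with full and LoRA parameters is detailed in the paper and repository.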

Chenxu Wang, Yilin Lyu, Zicheng Sun, Liping Jing • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Continual Learning | Large Number of Tasks | Average Performance | 76 | 50 |
| Continual Learning | Standard CL Benchmark | BWT (Avg Order 1-3) | 79.8 | 38 |
| Continual Learning | TRACE | Avg Performance | 50.4 | 37 |
| Continual Learning | Standard CL Benchmark | FLOPs | 0.125 | 3 |
| Continual Learning | Standard CL Benchmark Order-1 | Accuracy | 78.7 | 3 |
| Continual Learning | Standard CL Benchmark Order-2 | Accuracy | 78.8 | 3 |
| Continual Learning | Standard CL Benchmark Average | Accuracy | 78.6 | 3 |
| Continual Learning | Standard CL Benchmark Order-3 | Accuracy | 78.2 | 3 |
