
Controlled Low-Rank Adaptation with Subspace Regularization for Continued Training on Large Language Models

About

Large language models (LLMs) exhibit remarkable capabilities in natural language processing but face catastrophic forgetting when learning new tasks, where adaptation to a new domain leads to a substantial decline in performance on previous tasks. In this paper, we propose Controlled LoRA (CLoRA), a subspace regularization method built on the LoRA structure. Aiming to reduce the scale of output change while introducing minimal constraints on model capacity, CLoRA imposes a constraint on the direction of the updating matrix's null space. Experimental results on one-stage LLM finetuning tasks and continual learning settings highlight the superiority of CLoRA as an effective parameter-efficient finetuning method that mitigates catastrophic forgetting. Further investigation of model parameters indicates that CLoRA effectively balances the trade-off between model capacity and degree of forgetting.
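The abstract describes constraining the direction of the updating matrix's null space on top of the LoRA structure. The following is a minimal PyTorch sketch of one plausible reading: a fixed matrix P whose columns are pushed into the null space of the LoRA update ΔW = BA via a penalty ||BAP||_F². The class name CLoRALinear, the exact penalty form, and all hyperparameters are illustrative assumptions, not the paper's reference implementation.

```python
# Sketch of a LoRA layer with a null-space regularization term, loosely
# following the abstract's description of CLoRA. The penalty ||B A P||_F^2
# with a fixed random P is an assumption about the regularizer's form.
import torch
import torch.nn as nn

class CLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, k: int = 16,
                 reg_weight: float = 0.1):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weight
        d_out, d_in = base.weight.shape
        # LoRA factors: delta_W = B @ A
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        # Fixed directions P to be pushed into the null space of delta_W
        self.register_buffer("P", torch.randn(d_in, k))
        self.reg_weight = reg_weight

    def forward(self, x):
        # Frozen base output plus the low-rank update
        return self.base(x) + x @ self.A.t() @ self.B.t()

    def reg_loss(self):
        # ||B A P||_F^2: drives delta_W @ P toward 0, i.e. the columns of P
        # lie (approximately) in the null space of the update matrix, which
        # limits the scale of output change in those directions.
        return self.reg_weight * (self.B @ self.A @ self.P).pow(2).sum()
```

In training, the regularizer would simply be added to the task loss, e.g. `loss = task_loss + sum(m.reg_loss() for m in model.modules() if isinstance(m, CLoRALinear))`.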

Yuheng Lu, Bingshuo Qian, Caixia Yuan, Huixing Jiang, Xiaojie Wang • 2024

Related benchmarks

Task                              | Dataset                                 | Result                      | Rank
Multi-task Language Understanding | MMLU                                    | Accuracy: 20.59             | 842
Mathematical Reasoning            | GSM8K (test)                            | --                          | 751
Reasoning                         | BBH                                     | Accuracy: 38.67             | 507
Mathematical Reasoning            | MATH (test)                             | Overall Accuracy: 18.38     | 433
Commonsense Reasoning             | Common Sense Reasoning Tasks            | Avg Score: 83.7             | 241
Continual Learning                | Standard CL Benchmark                   | BWT (Avg Order 1-3): 79     | 38
Continual Learning                | Large Number of Tasks Benchmark (test)  | Performance (Order 1): 70.7 | 12
