Orthogonal Subspace Learning for Language Model Continual Learning

About

Benefiting from massive corpora and advanced hardware, large language models (LLMs) exhibit remarkable capabilities in language understanding and generation. However, their performance degrades when multiple tasks are encountered sequentially, a failure known as catastrophic forgetting. In this paper, we propose orthogonal low-rank adaptation (O-LoRA), a simple and efficient approach for continual learning in language models that effectively mitigates catastrophic forgetting while learning new tasks. Specifically, O-LoRA learns tasks in different (low-rank) vector subspaces that are kept orthogonal to each other in order to minimize interference. Our method incurs only marginal additional parameter costs and requires no storage of user data for replay. Experimental results on continual learning benchmarks show that our method outperforms state-of-the-art methods. Furthermore, compared to previous approaches, our method excels in preserving the generalization ability of LLMs on unseen tasks.
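
The core mechanism described above, constraining each new task's low-rank update to a subspace orthogonal to those of earlier tasks, can be made concrete with a short sketch. Below is a minimal PyTorch rendering of such an orthogonality penalty; the function name `orthogonality_loss`, the weight `lambda_orth`, and the (r, d) shape convention for LoRA down-projections are illustrative assumptions, not the paper's released implementation.

```python
import torch

def orthogonality_loss(A_new: torch.Tensor,
                       A_frozen: list[torch.Tensor]) -> torch.Tensor:
    """Penalize overlap between the new LoRA subspace and each frozen one.

    Each A has shape (r, d): r low-rank directions in the d-dim model space.
    The penalty is sum_i ||A_i @ A_new^T||_F^2, which is zero exactly when
    every row of A_new is orthogonal to every row of each previous A_i.
    """
    loss = A_new.new_zeros(())               # scalar accumulator
    for A_i in A_frozen:                     # frozen A's from earlier tasks
        overlap = A_i @ A_new.T              # (r_i, r_new) cross-Gram matrix
        loss = loss + (overlap ** 2).sum()   # squared Frobenius norm
    return loss

# Sketch of how the penalty would enter training on task t
# (lambda_orth is a hypothetical hyperparameter name):
#   loss = task_loss + lambda_orth * orthogonality_loss(A_t, frozen_As)
```

Driving every cross-Gram matrix A_i A_t^T toward zero keeps the new task's update out of the subspaces occupied by earlier tasks, which is what limits interference; and since only a small low-rank pair is trained per task, the extra parameter cost stays marginal.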

Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, Xuanjing Huang • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Continual Learning | Large Number of Tasks | Average Performance | 73.5 | 50
Continual Learning | Standard CL Benchmark | Avg Final Acc | 0.772 | 50
Continual Learning | Standard CL Benchmark | BWT (Avg Order 1-3) | 75.8 | 38
Continual Learning | TRACE | Avg Performance | 52.02 | 37
Continual Learning | Long CL Benchmark (N=15) | Long1 Performance | 71.5 | 18
Visual Question Answering | UCIT | ArxivQA | 80.93 | 16
Visual Question Answering | MLLM-DCL | Accuracy (Medical) | 46.89 | 16
Continual Learning | Long Sequence (test) | AP | 69.24 | 15
Continual Learning | SuperNI Benchmark | Average Score | 25.9 | 14
Continual Learning | Long Sequence Benchmark | OP | 69.6 | 14
Showing 10 of 36 rows.
