
Mixture-of-Subspaces in Low-Rank Adaptation

About

In this paper, we introduce a subspace-inspired Low-Rank Adaptation (LoRA) method, which is computationally efficient, easy to implement, and readily applicable to large language, multimodal, and diffusion models. Initially, we equivalently decompose the weights of LoRA into two subspaces, and find that simply mixing them can enhance performance. To study this phenomenon, we revisit it through a fine-grained subspace lens, showing that such modification is equivalent to employing a fixed mixer to fuse the subspaces. To be more flexible, we jointly learn the mixer with the original LoRA weights, and term the method Mixture-of-Subspaces LoRA (MoSLoRA). MoSLoRA consistently outperforms LoRA on tasks in different modalities, including commonsense reasoning, visual instruction tuning, and subject-driven text-to-image generation, demonstrating its effectiveness and robustness. Code is available at https://github.com/wutaiqiang/MoSLoRA.
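The core idea above can be sketched numerically: vanilla LoRA computes h = xW + xAB, and MoSLoRA inserts a learnable r×r mixer M between the down- and up-projections, giving h = xW + xAMB. The sketch below uses NumPy with hypothetical names and dimensions (`W`, `A`, `B`, `M`, `d`, `r` are illustrative, not the repository's API); with M set to the identity it reduces exactly to vanilla LoRA.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 4  # model dimension and LoRA rank (illustrative values)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01   # LoRA down-projection
B = np.zeros((r, d))                     # LoRA up-projection, zero-initialized
M = np.eye(r)                            # mixer; identity recovers vanilla LoRA

def lora_forward(x, W, A, B):
    """Vanilla LoRA: h = x W + x A B."""
    return x @ W + (x @ A) @ B

def moslora_forward(x, W, A, M, B):
    """MoSLoRA: a learnable r x r mixer M fuses the subspaces: h = x W + x A M B."""
    return x @ W + (x @ A) @ M @ B

x = rng.standard_normal((2, d))
# With M = I, MoSLoRA and LoRA coincide; training then updates A, M, B jointly.
assert np.allclose(moslora_forward(x, W, A, M, B), lora_forward(x, W, A, B))
```

The mixer adds only r² extra parameters per adapted weight, which is negligible next to the 2dr parameters of A and B, matching the paper's claim of computational efficiency.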

Taiqiang Wu, Jiahao Wang, Zhe Zhao, Ngai Wong • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 93.53 | 1460 |
| Visual Question Answering | TextVQA | Accuracy | 50.2 | 1117 |
| Multimodal Evaluation | MME | Score | 64.1 | 557 |
| Natural Language Understanding | GLUE | SST-2 | 96.17 | 452 |
| Reading Comprehension | RACE high | Accuracy | 83.75 | 295 |
| Multimodal Capability Evaluation | MM-Vet | Score | 35.2 | 282 |
| Reading Comprehension | RACE mid | Accuracy | 86.13 | 196 |
| Diagram Question Answering | AI2D | AI2D Accuracy | 66.1 | 196 |
| Commonsense Reasoning | Commonsense Reasoning (BoolQ, PIQA, SIQA, HellaS., WinoG., ARC-e, ARC-c, OBQA) (test) | BoolQ Accuracy | 74.6 | 138 |
| Reasoning | PIQA | Accuracy | 85.97 | 133 |

Showing 10 of 17 rows.
