
MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts

About

Fine-tuning Large Language Models (LLMs) is a common practice to adapt pre-trained models for specific applications. While methods like LoRA have effectively addressed GPU memory constraints during fine-tuning, their performance often falls short, especially in multi-task scenarios. In contrast, Mixture-of-Experts (MoE) models, such as Mixtral 8x7B, demonstrate remarkable performance in multi-task learning while maintaining a reduced parameter count. However, the resource requirements of these MoEs remain challenging, particularly for consumer-grade GPUs with less than 24GB of memory. To tackle these challenges, we propose MixLoRA, an approach that constructs a resource-efficient sparse MoE model based on LoRA. MixLoRA inserts multiple LoRA-based experts within the feed-forward network block of a frozen pre-trained dense model and employs a commonly used top-k router. Unlike other LoRA-based MoE methods, MixLoRA enhances model performance by utilizing independent attention-layer LoRA adapters. Additionally, an auxiliary load-balance loss is employed to address the imbalance problem of the router. Our evaluations show that MixLoRA improves accuracy by about 9% compared to state-of-the-art PEFT methods in multi-task learning scenarios. We also propose a new high-throughput framework to alleviate the computation and memory bottlenecks during the training and inference of MoE models. This framework reduces GPU memory consumption by 40% and token computation latency by 30% during both training and inference.
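The core idea described above can be sketched in a few dozen lines: a frozen dense FFN is shared by several LoRA experts (each a low-rank delta on the frozen weights), a top-k router picks experts per token, and a Switch-style auxiliary loss penalizes routing imbalance. The sketch below is a minimal NumPy illustration under our own assumptions; the names (`mixlora_ffn`, dimensions, ReLU activation, the exact load-balance formula) are hypothetical and are not taken from the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)

d, h, r = 8, 16, 2           # hidden dim, FFN dim, LoRA rank (illustrative sizes)
n_experts, top_k = 4, 2

# Frozen pre-trained FFN weights, shared by all experts
W_up = rng.normal(size=(d, h)) * 0.1
W_down = rng.normal(size=(h, d)) * 0.1

# LoRA-based experts: each expert adds a low-rank delta A @ B to the frozen weights.
# B starts at zero (standard LoRA init), so each expert initially equals the dense FFN.
A_up = rng.normal(size=(n_experts, d, r)) * 0.1
B_up = np.zeros((n_experts, r, h))
A_down = rng.normal(size=(n_experts, h, r)) * 0.1
B_down = np.zeros((n_experts, r, d))

W_router = rng.normal(size=(d, n_experts)) * 0.1

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixlora_ffn(x):
    """x: (tokens, d) -> (output (tokens, d), auxiliary load-balance loss)."""
    probs = softmax(x @ W_router)                 # (tokens, n_experts)
    top = np.argsort(-probs, axis=-1)[:, :top_k]  # top-k experts per token

    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        gates = probs[t, top[t]]
        gates = gates / gates.sum()               # renormalize top-k gate weights
        for gate, e in zip(gates, top[t]):
            up = x[t] @ (W_up + A_up[e] @ B_up[e])
            act = np.maximum(up, 0.0)             # ReLU (assumed for simplicity)
            out[t] += gate * (act @ (W_down + A_down[e] @ B_down[e]))

    # Switch-Transformer-style auxiliary loss: fraction of tokens routed to each
    # expert times the mean router probability for that expert.
    counts = np.zeros(n_experts)
    for t in range(x.shape[0]):
        counts[top[t]] += 1
    frac = counts / (x.shape[0] * top_k)
    aux_loss = n_experts * np.sum(frac * probs.mean(axis=0))
    return out, aux_loss

x = rng.normal(size=(5, d))
y, aux = mixlora_ffn(x)
```

Because the `B` matrices are zero-initialized, the layer's output at initialization matches the frozen dense FFN exactly; only the low-rank deltas and the router are trained, which is what keeps the memory footprint close to plain LoRA.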

Dengchun Li, Yingzi Ma, Naizheng Wang, Zhengmao Ye, Zhiyuan Cheng, Yinghao Tang, Yan Zhang, Lei Duan, Jie Zuo, Cai Yang, Mingjie Tang · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | ARC Challenge | Accuracy | 56.3 | 749 |
| Question Answering | OpenBookQA | Accuracy | 86.9 | 465 |
| Question Answering | ARC Easy | Normalized Acc | 77 | 385 |
| Physical Interaction Question Answering | PIQA | Accuracy | 87.6 | 323 |
| Boolean Question Answering | BoolQ | Accuracy | 67.2 | 307 |
| Question Answering | OBQA | Accuracy | 75.8 | 276 |
| Question Answering | ARC-E | Accuracy | 87.7 | 242 |
| Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score | 84.1 | 241 |
| Question Answering | ARC-C | Accuracy | 79.9 | 166 |
| Common Sense Reasoning | WinoGrande | Accuracy | 86.5 | 156 |

Showing 10 of 18 rows.
