
DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution

About

Fine-tuning large-scale pre-trained models is inherently resource-intensive. While it can enhance a model's capabilities, it also incurs substantial computational costs, which hinders its practical application to downstream tasks. Existing parameter-efficient fine-tuning (PEFT) methods such as Low-Rank Adaptation (LoRA) rely on a bypass framework that ignores the differing parameter budget requirements across weight matrices, which may lead to suboptimal fine-tuning outcomes. To address this issue, we introduce the Dynamic Low-Rank Adaptation (DoRA) method. DoRA decomposes high-rank LoRA layers into structured single-rank components, allowing the parameter budget to be pruned dynamically during training according to each component's importance to the task, thereby making the most of a limited parameter budget. Experimental results demonstrate that DoRA achieves performance competitive with LoRA and full model fine-tuning, and outperforms various strong baselines under the same storage parameter budget. Our code is available at https://github.com/MIkumikumi0116/DoRA
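To make the mechanism concrete, below is a minimal PyTorch sketch of the core idea in the abstract: the low-rank update is decomposed into gated rank-1 components, each of which can be pruned during training according to an importance score. The class and method names (`RankDecomposedLoRALayer`, `component_importance`, `prune`) and the norm-based importance heuristic are illustrative assumptions for this sketch, not the paper's actual implementation; see the linked repository for the authors' code.

```python
import torch
import torch.nn as nn

class RankDecomposedLoRALayer(nn.Module):
    """Sketch: frozen weight W plus a gated low-rank update B diag(g) A.
    Each rank-1 component (row a_i of A, column b_i of B) has a gate g_i
    that can be zeroed to prune that component from the parameter budget."""

    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight)
        self.weight.requires_grad = False  # pre-trained weight stays frozen
        # Low-rank factors: A is (rank, in), B is (out, rank)
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        # One gate per rank-1 component; pruning sets a gate to zero
        self.gates = nn.Parameter(torch.ones(rank))
        self.scaling = alpha / rank

    def forward(self, x):
        base = x @ self.weight.T
        # Gated low-rank update, equivalent to x A^T diag(g) B^T
        update = (x @ self.lora_A.T) * self.gates   # (..., rank)
        update = update @ self.lora_B.T             # (..., out)
        return base + self.scaling * update

    @torch.no_grad()
    def component_importance(self):
        # Illustrative importance proxy: |g_i| * ||b_i|| * ||a_i||
        return (self.gates.abs()
                * self.lora_B.norm(dim=0)
                * self.lora_A.norm(dim=1))

    @torch.no_grad()
    def prune(self, keep: int):
        # Zero the gates of the least important components so the
        # budget they consumed can be reassigned elsewhere.
        idx = torch.argsort(self.component_importance(), descending=True)
        mask = torch.zeros_like(self.gates)
        mask[idx[:keep]] = 1.0
        self.gates.mul_(mask)
```

In this sketch, a training loop would periodically call `prune` on each adapted layer; a global scheduler could then compare importance scores across layers and keep more components in the matrices that benefit most from the shared budget.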

Yulong Mao, Kaiyu Huang, Changhao Guan, Ganglin Bao, Fengran Mo, Jinan Xu • 2024

Related benchmarks

Task                    Dataset                 Result           Rank
Commonsense Reasoning   HellaSwag               Accuracy 93.62   1460
Question Answering      SQuAD v1.1 (dev)        F1 92.24         375
Reading Comprehension   RACE high               Accuracy 83.39   295
Reading Comprehension   RACE mid                Accuracy 86.77   196
Question Answering      SQuAD v2.0 (dev)        F1 83.53         158
Reasoning               PIQA                    Accuracy 85.75   133
Text Summarization      Xsum BART-base (dev)    ROUGE-1 39.67    7

Other info

Code: https://github.com/MIkumikumi0116/DoRA
