
LoRA+: Efficient Low Rank Adaptation of Large Models

About

In this paper, we show that Low Rank Adaptation (LoRA) as originally introduced in Hu et al. (2021) leads to suboptimal finetuning of models with large width (embedding dimension). This is because the adapter matrices A and B in LoRA are updated with the same learning rate. Using scaling arguments for large-width networks, we demonstrate that using the same learning rate for A and B does not allow efficient feature learning. We then show that this suboptimality of LoRA can be corrected simply by setting different learning rates for the LoRA adapter matrices A and B with a well-chosen ratio. We call this proposed algorithm LoRA$+$. In our extensive experiments, LoRA$+$ improves performance (1-2$\%$ gains) and finetuning speed (up to $\sim$2X speedup) at the same computational cost as LoRA.
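The core idea, decoupled learning rates for the two adapter matrices, can be sketched in plain NumPy. Everything below is illustrative: the toy sizes, the squared loss, and the ratio value of 16 are assumptions for the sketch; the paper derives the appropriate scaling of the ratio with width.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                        # toy width and LoRA rank (illustrative)
W0 = rng.standard_normal((d, d))   # frozen pretrained weight
A = rng.standard_normal((r, d)) / np.sqrt(d)  # adapter A (trainable)
B = np.zeros((d, r))               # adapter B, zero-initialized as in Hu et al. (2021)

x = rng.standard_normal(d)
target = rng.standard_normal(d)

lr_A = 1e-3
ratio = 16.0                       # hypothetical eta_B / eta_A ratio; tuned in practice
lr_B = ratio * lr_A

# Forward pass through the adapted layer: h = (W0 + B A) x, toy squared loss.
h = W0 @ x + B @ (A @ x)
g = 2.0 * (h - target)             # dLoss/dh

# Backward through the adapter only (W0 stays frozen).
grad_B = np.outer(g, A @ x)        # dLoss/dB
grad_A = np.outer(B.T @ g, x)      # dLoss/dA

# LoRA+ step: identical SGD rule for both matrices, but B uses the larger rate.
A -= lr_A * grad_A
B -= lr_B * grad_B
```

In a framework like PyTorch this amounts to placing the A and B parameters in two optimizer parameter groups with learning rates `lr_A` and `ratio * lr_A`; no other change to the LoRA training loop is needed.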

Soufiane Hayou, Nikhil Ghosh, Bin Yu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K (test) | Accuracy | 52.11 | 751 |
| Code Generation | HumanEval (test) | Pass@1 | 18.17 | 444 |
| Image Classification | GTSRB | Accuracy | 93.03 | 291 |
| Image Classification | Pets | Accuracy | 94.07 | 204 |
| Image Classification | FGVC Aircraft | -- | -- | 185 |
| Natural Language Understanding | GLUE (val) | SST-2 | 93.85 | 170 |
| Image Classification | Flowers | Accuracy | 94.35 | 83 |
| Image Classification | CIFAR10 | Accuracy | 94.22 | 70 |
| Natural Language Generation | E2E NLG Challenge | BLEU | 70.2 | 58 |
| Image Classification | FER 2013 | Top-1 Acc | 0.5951 | 46 |

Showing 10 of 13 rows.
