LoRA+: Efficient Low Rank Adaptation of Large Models
About
In this paper, we show that Low Rank Adaptation (LoRA), as originally introduced in Hu et al. (2021), leads to suboptimal finetuning of models with large width (embedding dimension). This is because the adapter matrices A and B in LoRA are updated with the same learning rate. Using scaling arguments for large-width networks, we demonstrate that using the same learning rate for A and B does not allow efficient feature learning. We then show that this suboptimality of LoRA can be corrected simply by setting different learning rates for the LoRA adapter matrices A and B with a well-chosen ratio. We call this proposed algorithm LoRA$+$. In our extensive experiments, LoRA$+$ improves performance (1-2$\%$ improvements) and finetuning speed (up to $\sim$2X speedup), at the same computational cost as LoRA.
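A minimal sketch of the idea, assuming a PyTorch-style setup: LoRA$+$ amounts to building optimizer parameter groups so that B matrices get a learning rate that is a fixed ratio larger than the learning rate of A matrices. The helper name, the `lora_A`/`lora_B` naming convention, and the example ratio are illustrative assumptions, not the authors' released code.

```python
# Hypothetical helper illustrating the LoRA+ learning-rate split.
# Assumption: LoRA parameters are named with "lora_A" / "lora_B",
# as in common PEFT implementations.

def loraplus_param_groups(named_params, lr=1e-4, loraplus_ratio=16.0):
    """Split parameters into two optimizer groups: B matrices train
    with a learning rate `loraplus_ratio` times larger than A matrices."""
    groups = {
        "lora_A": {"params": [], "lr": lr},
        "lora_B": {"params": [], "lr": lr * loraplus_ratio},
    }
    for name, param in named_params:
        key = "lora_B" if "lora_B" in name else "lora_A"
        groups[key]["params"].append(param)
    return [groups["lora_A"], groups["lora_B"]]

# Example: two adapter parameters (placeholder objects stand in for tensors).
params = [("layer0.lora_A.weight", "A0"), ("layer0.lora_B.weight", "B0")]
groups = loraplus_param_groups(params, lr=1e-4, loraplus_ratio=16.0)
```

The returned list can be passed directly to any optimizer that accepts per-group learning rates, e.g. `torch.optim.AdamW(loraplus_param_groups(model.named_parameters()))`.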
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K (test) | Accuracy | 52.11 | 751 |
| Code Generation | HumanEval (test) | Pass@1 | 18.17 | 444 |
| Image Classification | GTSRB | Accuracy | 93.03 | 291 |
| Image Classification | Pets | Accuracy | 94.07 | 204 |
| Image Classification | FGVC Aircraft | -- | -- | 185 |
| Natural Language Understanding | GLUE (val) | SST-2 | 93.85 | 170 |
| Image Classification | Flowers | Accuracy | 94.35 | 83 |
| Image Classification | CIFAR10 | Accuracy | 94.22 | 70 |
| Natural Language Generation | E2E NLG Challenge | BLEU | 70.2 | 58 |
| Image Classification | FER 2013 | Top-1 Acc | 0.5951 | 46 |