
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models

About

Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method, which freezes the pretrained model weights and updates an additive low-rank trainable matrix. In this work, we study the enhancement of LoRA training by introducing an $r \times r$ preconditioner in each gradient step, where $r$ is the LoRA rank. We theoretically verify that the proposed preconditioner stabilizes feature learning with LoRA in the infinite-width neural network setting. Empirically, implementing this preconditioner requires only a small change to existing optimizer code and incurs negligible storage and runtime overhead. Our experimental results with both large language models and text-to-image diffusion models show that with this new preconditioner, the convergence and reliability of SGD and AdamW are significantly enhanced. Moreover, the training process becomes much more robust to hyperparameter choices such as the learning rate. The preconditioner can be derived from a novel Riemannian metric on the space of low-rank matrices. Code can be accessed at https://github.com/pilancilab/Riemannian_Preconditioned_LoRA.
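The abstract describes an $r \times r$ preconditioner applied in each gradient step. As a minimal sketch only (the function name, damping term `eps`, and plain-SGD setting are illustrative assumptions, not the repository's implementation): for a LoRA update $W = W_0 + BA$ with $B \in \mathbb{R}^{d \times r}$ and $A \in \mathbb{R}^{r \times k}$, a scaled-gradient-descent step preconditions the gradient of $A$ by $(B^\top B)^{-1}$ and the gradient of $B$ by $(AA^\top)^{-1}$, so each step only requires solving $r \times r$ linear systems:

```python
import numpy as np

def precond_lora_step(A, B, grad_A, grad_B, lr, eps=1e-6):
    """One preconditioned SGD step for LoRA factors (illustrative sketch).

    A: (r, k) and B: (d, r) LoRA factors, with W = W0 + B @ A.
    grad_A / grad_B: loss gradients w.r.t. A and B.
    The gradient of A is scaled by (B^T B)^{-1}, the gradient of B
    by (A A^T)^{-1}; eps * I is an assumed damping term that keeps
    the r x r systems well conditioned.
    """
    r = A.shape[0]
    I = np.eye(r)
    # Solve the r x r systems instead of forming explicit inverses.
    A_new = A - lr * np.linalg.solve(B.T @ B + eps * I, grad_A)
    # grad_B @ (A A^T + eps I)^{-1}, via a solve on the transpose
    # (valid because A A^T + eps I is symmetric).
    B_new = B - lr * np.linalg.solve(A @ A.T + eps * I, grad_B.T).T
    return A_new, B_new
```

Because the systems are only $r \times r$ (and LoRA ranks are typically small, e.g. 4 to 64), the extra cost per step is tiny compared to the forward and backward passes, which matches the abstract's claim of negligible overhead.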

Fangzhao Zhang, Mert Pilanci • 2024

Related benchmarks

Task                            Dataset   Result             Rank
Common Sense Reasoning          BoolQ     Accuracy: 71.47    131
Natural Language Understanding  GLUE      MNLI: 85.67        6
