LoRA-Pro: Are Low-Rank Adapters Properly Optimized?

About

Low-rank adaptation, also known as LoRA, has emerged as a prominent method for parameter-efficient fine-tuning of foundation models. Despite its computational efficiency, LoRA still yields inferior performance compared to full fine-tuning. In this paper, we first uncover a fundamental connection between the optimization processes of LoRA and full fine-tuning: using LoRA for optimization is mathematically equivalent to full fine-tuning using a low-rank gradient for parameter updates, and this low-rank gradient can be expressed in terms of the gradients of the two low-rank matrices in LoRA. Leveraging this insight, we introduce LoRA-Pro, a method that enhances LoRA's performance by strategically adjusting the gradients of these low-rank matrices. This adjustment allows the low-rank gradient to more accurately approximate the full fine-tuning gradient, thereby narrowing the performance gap between LoRA and full fine-tuning. Furthermore, we theoretically derive the optimal solutions for adjusting the gradients of the low-rank matrices and apply them during fine-tuning in LoRA-Pro. We conduct extensive experiments across natural language understanding, dialogue generation, mathematical reasoning, code generation, and image classification tasks, demonstrating that LoRA-Pro substantially improves LoRA's performance, effectively narrowing the gap with full fine-tuning. Code is publicly available at https://github.com/mrflogs/LoRA-Pro.
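To make the abstract's core idea concrete: under the standard LoRA parameterization W = W0 + s·B·A (with B of shape m×r, A of shape r×n, and scaling s), a first-order step on A and B changes W by s(ΔB·A + B·ΔA), which is the "low-rank gradient" the equivalence refers to. The sketch below shows the closed-form least-squares adjustment obtained by matching this low-rank update to the full fine-tuning gradient. It is a minimal reconstruction from the setup described above, not the authors' released implementation (see the linked repository for that); the function name, the ridge term for numerical stability, and the variable names are all illustrative.

```python
import torch

def lora_pro_adjust(A, B, grad_A, grad_B, s, eps=1e-8):
    """Adjust LoRA gradients so that the induced low-rank update
    s * (dB @ A + B @ dA) best approximates the full fine-tuning
    gradient in the least-squares sense.

    Assumed shapes: B is (m, r), A is (r, n), so W = W0 + s * B @ A.
    grad_A, grad_B are the ordinary loss gradients w.r.t. A and B.
    """
    r = A.shape[0]
    eye = torch.eye(r, dtype=A.dtype, device=A.device)
    # r x r Gram matrices; the small ridge term eps keeps them invertible.
    BtB_inv = torch.linalg.inv(B.T @ B + eps * eye)
    AAt_inv = torch.linalg.inv(A @ A.T + eps * eye)
    # Closed-form minimizer of || s*(B @ X + Y @ A) - g ||_F^2, rewritten
    # in terms of grad_A = s * B^T g and grad_B = s * g @ A^T:
    # X recovers the component of g lying in the column space of B;
    # Y picks up the residual, projected onto the row space of A.
    X = (BtB_inv @ grad_A) / s**2                                   # shape (r, n)
    Y = ((grad_B - B @ BtB_inv @ grad_A @ A.T) @ AAt_inv) / s**2    # shape (m, r)
    return X, Y

# Hypothetical usage inside a training step (names are illustrative):
#   loss.backward()
#   A.grad, B.grad = lora_pro_adjust(A.data, B.data, A.grad, B.grad, s=alpha / r)
#   optimizer.step()
```

Note the design point this exposes: the adjustment only requires inverting two r×r matrices per adapted layer, so for typical ranks (r ≪ m, n) the overhead on top of vanilla LoRA stays small.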

Zhengbo Wang, Jian Liang, Ran He, Zilei Wang, Tieniu Tan • 2024

Related benchmarks

Task                             Dataset                Metric             Result   Rank
Language Modeling                WikiText2 (val)        Perplexity (PPL)   20.06    387
Common Sense Reasoning           BoolQ                  Accuracy           70.8     212
Natural Language Understanding   GLUE base (test dev)   CoLA MCC           71.36    11
Subject-driven image generation  DreamBooth             Fine-tuning Loss   0.099    4
