LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning

About

The low-rank adaptation (LoRA) method can largely reduce the number of trainable parameters for fine-tuning large language models (LLMs); however, it still requires expensive activation memory to update the low-rank weights. Reducing the number of LoRA layers or using activation recomputation either harms fine-tuning performance or increases computational overhead. In this work, we present LoRA-FA, a memory-efficient fine-tuning method that reduces activation memory without performance degradation or expensive recomputation. LoRA-FA freezes the projection-down weight $A$ and updates only the projection-up weight $B$ in each LoRA layer. This ensures that the change of the model weights resides in a low-rank space during LLM fine-tuning, while eliminating the need to store the full-rank input activations. We conduct extensive experiments across multiple model types (RoBERTa, T5, LLaMA) and model scales. Our results show that LoRA-FA consistently achieves fine-tuning accuracy close to that of full-parameter fine-tuning and LoRA across different tasks. Furthermore, LoRA-FA reduces the overall memory cost by up to 1.4$\times$ compared to LoRA.
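To illustrate the idea, below is a minimal PyTorch-style sketch of a linear layer with a LoRA-FA adapter. It is not the authors' released code: the class name `LoRAFALinear` and the parameters `rank` and `alpha` are illustrative assumptions, and the frozen base weight stands in for a pre-trained weight that would normally be loaded from the base model.

```python
import torch
import torch.nn as nn


class LoRAFALinear(nn.Module):
    """Linear layer with a LoRA-FA adapter: A is frozen, only B is trained (sketch)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: int = 16):
        super().__init__()
        # Placeholder for the frozen pre-trained weight W (out_features x in_features);
        # in practice this is loaded from the base model, not randomly initialized.
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)

        # Projection-down weight A: randomly initialized and kept frozen (the "FA" part).
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) / rank ** 0.5,
                                   requires_grad=False)
        # Projection-up weight B: initialized to zero and the only trained parameter here.
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank), requires_grad=True)
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base path: x W^T.
        base = x @ self.weight.T
        # Because A is frozen, the backward pass only needs the rank-r activation
        # (x A^T) to compute the gradient of B; the full-rank input x does not have
        # to be stored for this layer, which is where the activation-memory saving comes from.
        low_rank = x @ self.lora_A.T          # shape (..., rank)
        update = low_rank @ self.lora_B.T     # shape (..., out_features)
        return base + self.scaling * update
```

In a full fine-tuning setup under this sketch, all base-model weights and every `lora_A` stay frozen, so gradients and optimizer states are kept only for the small `lora_B` matrices, and the resulting weight change $\Delta W = B A$ remains confined to a rank-$r$ subspace.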

Longteng Zhang, Lin Zhang, Shaohuai Shi, Xiaowen Chu, Bo Li • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | Pass@1 | 15.91 | 1036 |
| Commonsense Reasoning | PIQA | Accuracy | 75.97 | 751 |
| Natural Language Understanding | GLUE | SST-2 | 93.65 | 531 |
| Reading Comprehension | RACE (high) | Accuracy | 79.03 | 295 |
| Commonsense Reasoning | HellaSwag | Accuracy | 89.16 | 213 |
| Reading Comprehension | RACE (mid) | Accuracy | 82.79 | 196 |
| Commonsense Reasoning | WinoGrande | Accuracy | 82.16 | 189 |
| Mathematical Reasoning | GSM8K (val) | Accuracy | 40.25 | 81 |
| Code Generation | MBPP | Pass@1 | 20.01 | 59 |
| Mathematical Reasoning | MATH (val) | Accuracy | 5.66 | 48 |
Showing 10 of 31 rows.
