
MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning

About

Low-rank adaptation (LoRA) is a popular parameter-efficient fine-tuning method for large language models. In this paper, we analyze the impact of low-rank updating as implemented in LoRA. Our findings suggest that the low-rank updating mechanism may limit the ability of LLMs to effectively learn and memorize new knowledge. Motivated by this observation, we propose a new method, MoRA, which employs a square matrix to achieve high-rank updating while maintaining the same number of trainable parameters. To achieve this, we introduce corresponding non-parameter operators that reduce the input dimension and increase the output dimension for the square matrix. Furthermore, these operators ensure that the weight can be merged back into the LLM, so our method can be deployed just like LoRA. We perform a comprehensive evaluation of our method across five tasks: instruction tuning, mathematical reasoning, continual pretraining, memory, and pretraining. Our method outperforms LoRA on memory-intensive tasks and achieves comparable performance on the other tasks.
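The construction described in the abstract can be sketched in a few lines. This is a minimal NumPy sketch under stated assumptions: the specific compress/decompress operators (a group-sum and an element repetition) and the choice of `r_hat` are illustrative stand-ins, not necessarily the operators the paper uses, and `mora_update` is a hypothetical helper name.

```python
import numpy as np

# Hypothetical helper illustrating the MoRA idea: a single trainable square
# matrix M (r_hat x r_hat) replaces LoRA's two low-rank factors, wrapped by
# non-parameter operators that shrink the input and expand the output.
# The operators here (group-sum to compress, repetition to decompress) are
# illustrative assumptions, not the paper's exact design.
def mora_update(x, M, d, r_hat):
    """Apply a MoRA-style high-rank update to a batch of activations.

    x: (batch, d) input; M: (r_hat, r_hat) trainable square matrix.
    Requires d % r_hat == 0 to keep the sketch simple.
    """
    assert d % r_hat == 0
    groups = x.reshape(-1, d // r_hat, r_hat)   # split features into groups
    compressed = groups.sum(axis=1)             # non-parameter compression: (batch, r_hat)
    out = compressed @ M.T                      # square update, rank up to r_hat
    return np.repeat(out, d // r_hat, axis=1)   # non-parameter decompression: (batch, d)

# Parameter budget: for hidden size d = 4096 and LoRA rank r = 8, LoRA trains
# A (d x r) and B (r x d), i.e. 2*d*r = 65,536 parameters. Choosing
# r_hat = 256 gives r_hat**2 = 65,536 as well, so a MoRA-style module matches
# the budget while its update matrix can reach rank 256 instead of 8.
d, r, r_hat = 4096, 8, 256
assert 2 * d * r == r_hat ** 2
```

Because both the compression and decompression steps are fixed (non-parameter) linear maps, the whole update is equivalent to adding a single `d x d` matrix to the frozen weight, which is what makes merging back into the base model possible.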

Ting Jiang, Shaohan Huang, Shengyue Luo, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang, Deqing Wang, Fuzhen Zhuang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score | 78.63 | 316 |
| Image Classification | PACS | Accuracy | 89.09 | 100 |
| Visual Task Adaptation | VTAB 1K | Average Accuracy | 75.4 | 78 |
| Commonsense Reasoning | Commonsense Reasoning Suite (test) | HellaSwag Accuracy | 0.4353 | 62 |
| Reading Comprehension | DROP (test) | F1 Score | 58.9 | 61 |
| Mathematical Reasoning | GSM8K | Accuracy | 67.89 | 57 |
| Dialogue Generation | ConvAI2 | BLEU | 1.6 | 24 |
| Commonsense Reasoning | Commonsense170k (test) | BoolQ Accuracy | 69.05 | 22 |
| Dialogue Generation | ConvAI2 (test) | BLEU | 2.35 | 20 |
| Automatic Speech Recognition | CommonVoice normative (test) | CER | 2.33 | 11 |
Showing 10 of 11 rows
