
Flexora: Flexible Low Rank Adaptation for Large Language Models

About

Large Language Models (LLMs) are driving advances in artificial intelligence by increasing the scale of model parameters, which has significantly enhanced generalization ability and unlocked new capabilities in practice. However, their performance on specific downstream tasks is usually hindered by their knowledge boundaries on those tasks. Fine-tuning techniques, especially the widely used Low-Rank Adaptation (LoRA) method, have therefore been introduced to expand these boundaries, but LoRA can still underperform on certain tasks due to overfitting. To overcome this overfitting and improve the performance of LoRA, we propose the flexible low-rank adaptation (Flexora) method, which automatically and flexibly selects the most important layers to fine-tune in order to achieve the best performance on different downstream tasks. Specifically, Flexora first frames this layer-selection problem as a well-defined hyperparameter optimization (HPO) problem, then addresses it using the unrolled differentiation (UD) method, and finally selects the most useful layers based on the optimized hyperparameters. Our extensive experiments on many pretrained models and natural language tasks show that Flexora consistently improves over existing baselines, indicating its effectiveness in practice. We additionally provide insightful theoretical results and many ablation studies to deliver a comprehensive understanding of Flexora.
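The pipeline the abstract describes (per-layer gates as hyperparameters, an unrolled inner fine-tuning loop, gate updates on a validation loss, then layer selection) can be illustrated with a toy sketch. Everything below is hypothetical: the layer count, loss functions, and finite-difference outer gradient (a simple numerical stand-in for the paper's analytic unrolled differentiation) are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of Flexora-style layer selection (illustrative only).
# Each "layer" i has a scalar adapter weight w[i] and a gate a[i] in [0, 1].
N = 4                  # hypothetical number of layers
USEFUL = {0, 2}        # layers that actually help on this toy task
TARGET = [1.0 if i in USEFUL else 0.0 for i in range(N)]

def train_loss(w, a):
    # Only gated layers contribute; useful layers are rewarded for w[i] -> 1.
    return sum((a[i] * w[i] - TARGET[i]) ** 2 for i in range(N))

def val_loss(w, a):
    # Validation penalizes gating useless layers (a simple overfitting proxy).
    penalty = 0.1 * sum(a[i] for i in range(N) if i not in USEFUL)
    return train_loss(w, a) + penalty

def unrolled_val(a, inner_steps=5, inner_lr=0.5):
    # Inner loop: train adapter weights from scratch given fixed gates a,
    # then evaluate on the validation loss (this is the unrolled objective).
    w = [0.0] * N
    for _ in range(inner_steps):
        grads = [2.0 * a[i] * (a[i] * w[i] - TARGET[i]) for i in range(N)]
        w = [w[i] - inner_lr * grads[i] for i in range(N)]
    return val_loss(w, a)

def optimize_gates(outer_steps=40, outer_lr=0.2, eps=1e-4):
    # Outer loop: update gates on the unrolled validation loss; here the
    # outer gradient is approximated by finite differences for simplicity.
    a = [0.5] * N
    for _ in range(outer_steps):
        base = unrolled_val(a)
        grad = []
        for i in range(N):
            a_pert = list(a)
            a_pert[i] += eps
            grad.append((unrolled_val(a_pert) - base) / eps)
        a = [min(1.0, max(0.0, a[i] - outer_lr * grad[i])) for i in range(N)]
    return a

def select_layers(a, threshold=0.5):
    # Final step: keep only layers whose optimized gate is large enough.
    return {i for i in range(N) if a[i] >= threshold}

gates = optimize_gates()
selected = select_layers(gates)
print(selected)  # the useful layers end up with the highest gates
```

In this toy setting, gates on unhelpful layers are driven toward zero by the validation penalty while gates on helpful layers grow, so only the useful layers survive selection; the real method applies the same idea to LoRA modules inside a transformer.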

Chenxing Wei, Yao Shu, Ying Tiffany He, Fei Richard Yu• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 96.47 | 1460 |
| Commonsense Reasoning | PIQA | Accuracy | 87.54 | 647 |
| Reading Comprehension | RACE high | Accuracy | 88.19 | 295 |
| Reading Comprehension | RACE mid | Accuracy | 89.9 | 196 |
| Common Sense Reasoning | HellaSwag | Accuracy | 93.87 | 164 |
| Common Sense Reasoning | WinoGrande | Accuracy | 85.79 | 156 |
| Reasoning | PIQA | Accuracy | 91.06 | 133 |
| Commonsense Reasoning | BoolQ, PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, OBQA (test) | BoolQ Accuracy | 73.54 | 4 |
| Commonsense Reasoning | WinoGrande | Time (h) | 3.84 | 2 |
| Physical Commonsense Reasoning | PIQA | Time (h) | 3.87 | 2 |

(10 of 12 rows shown)
