
LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models

About

In addressing the computational and memory demands of fine-tuning Large Language Models (LLMs), we propose LoRA-SP (Streamlined Partial Parameter Adaptation), a novel approach utilizing randomized half-selective parameter freezing within the Low-Rank Adaptation (LoRA) framework. This method efficiently balances retention of pre-trained knowledge with adaptability for task-specific optimization. Through a randomized mechanism, LoRA-SP determines which parameters to update or freeze, significantly reducing computational and memory requirements without compromising model performance. We evaluated LoRA-SP across several benchmark NLP tasks, demonstrating its ability to achieve competitive performance with substantially lower resource consumption than traditional full-parameter fine-tuning and other parameter-efficient techniques. LoRA-SP's innovative approach not only facilitates the deployment of advanced NLP models in resource-limited settings but also opens new research avenues into effective and efficient model adaptation strategies.
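No implementation accompanies this listing, but a minimal PyTorch sketch of the idea described above might look like the following. The class name `LoRASPLinear`, the `freeze_ratio` parameter, and the per-entry gradient-masking mechanism are all illustrative assumptions; the authors' actual freezing granularity (e.g., whole-matrix vs. per-entry) and implementation may differ.

```python
import torch
import torch.nn as nn

class LoRASPLinear(nn.Module):
    """Hypothetical sketch of LoRA-SP-style adaptation: a frozen base linear
    layer plus low-rank A/B adapters in which roughly half of the adapter
    parameters are randomly frozen via fixed binary masks."""

    def __init__(self, in_features, out_features, rank=8, freeze_ratio=0.5):
        super().__init__()
        # Pre-trained weight stays frozen, as in standard LoRA.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False

        # Low-rank adapters: effective weight is W + B @ A.
        # A is initialized small and random, B at zero (standard LoRA init).
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))

        # Randomly select ~(1 - freeze_ratio) of the adapter entries to stay
        # trainable; the masks are sampled once and fixed for the whole run.
        self.register_buffer("mask_A", (torch.rand_like(self.A) > freeze_ratio).float())
        self.register_buffer("mask_B", (torch.rand_like(self.B) > freeze_ratio).float())

        # Zero the gradients of frozen entries after each backward pass,
        # so the optimizer never updates them.
        self.A.register_hook(lambda g: g * self.mask_A)
        self.B.register_hook(lambda g: g * self.mask_B)

    def forward(self, x):
        # Frozen base projection plus the partially trainable low-rank update.
        return self.base(x) + x @ self.A.T @ self.B.T
```

Sampling the masks once and applying them as gradient hooks leaves the forward computation unchanged while keeping the frozen entries out of the update; note that optimizers with decoupled weight decay (e.g., AdamW) would still decay masked entries, so a full implementation would need to account for that.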

Yichao Wu, Yafei Xiang, Shuning Huo, Yulu Gong, Penghao Liang • 2024

Related benchmarks

| Task                  | Dataset    | Accuracy | Rank |
|-----------------------|------------|----------|------|
| Commonsense Reasoning | PIQA       | 78.97    | 647  |
| Reading Comprehension | RACE high  | 79.01    | 295  |
| Reading Comprehension | RACE mid   | 83.27    | 196  |
| Commonsense Reasoning | HellaSwag  | 89.37    | 164  |
| Commonsense Reasoning | WinoGrande | 83.67    | 156  |
