
Enhancing Large Language Model Performance with Gradient-Based Parameter Selection

About

Large language models (LLMs) have revolutionized many fields of research. Although it is well known that fine-tuning is essential for enhancing the capabilities of LLMs, existing research suggests that there is potential redundancy in the fine-tuning process and therefore proposes to update only a subset of parameters. However, these methods fail to leverage task-specific information to identify important parameters during training. Based on the insight that gradients inherently contain information about the task-specific data, we propose Gradient-Mask Tuning (GMT), a method that selectively updates parameters during training based on their gradient information. Specifically, we compute the absolute values of the gradients and apply masking to those with relatively smaller magnitudes. Our empirical results across various tasks demonstrate that GMT not only outperforms traditional fine-tuning methods but also elevates the upper limits of LLM performance. Further analysis indicates that GMT is insensitive to the mask ratio and has computational efficiency comparable to vanilla SFT.
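The masking rule described above — keep only the largest-magnitude gradient entries and zero out the rest before the parameter update — can be sketched in a few lines. This is an illustrative toy using NumPy, not the authors' implementation; the function name `mask_small_gradients`, the per-tensor quantile threshold, and the `mask_ratio` parameter are assumptions for the sketch.

```python
import numpy as np

def mask_small_gradients(grad, mask_ratio=0.5):
    """Zero out gradient entries whose absolute value falls below the
    mask_ratio quantile, so only the largest-magnitude gradients drive
    the update (a sketch of the GMT masking idea, per tensor)."""
    magnitude = np.abs(grad)
    # Threshold at the mask_ratio quantile of the magnitudes:
    # roughly mask_ratio of the entries get masked to zero.
    threshold = np.quantile(magnitude, mask_ratio)
    return np.where(magnitude >= threshold, grad, 0.0)

# Example: with mask_ratio=0.5, the two smallest-magnitude entries are zeroed.
g = np.array([0.1, -0.5, 0.03, 2.0])
masked = mask_small_gradients(g, mask_ratio=0.5)  # → [0.0, -0.5, 0.0, 2.0]
```

In an actual fine-tuning loop, a step like this would run after the backward pass and before the optimizer step, applied to each parameter's gradient tensor.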

Haoling Li, Xin Zhang, Xiao Liu, Yeyun Gong, Yifan Wang, Qi Chen, Peng Cheng • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Instruction Following | IFEval | - | 292 |
| General Reasoning | MMLU | MMLU Accuracy: 65.4 | 126 |
| Chat | AlpacaEval 2.0 (test) | - | 46 |
| Chat | MT-Bench | MT-Bench Score: 3.67 | 30 |
| Safety | T3 | T3 Score: 79.5 | 21 |
| Machine Translation | FLORES-200 (source language: en) | MT Score: 45.5 | 16 |
| Summarization | XL-SUM (target language) | SUM Score: 22.9 | 16 |
| General Reasoning | Global MMLU | MMLU: 35.3 | 16 |
| Machine Reading Comprehension | Belebele (source language: en) | MRC Score: 89.6 | 16 |
| Machine Reading Comprehension | Belebele (target language) | MRC Score: 47.3 | 16 |

Showing 10 of 12 rows.
