
Sparse is Enough in Fine-tuning Pre-trained Large Language Models

About

With the prevalence of the pre-training-fine-tuning paradigm, how to efficiently adapt a pre-trained model to downstream tasks has become an intriguing issue. Parameter-Efficient Fine-Tuning (PEFT) methods have been proposed for low-cost adaptation. Although PEFT has demonstrated effectiveness and been widely applied, its underlying principles remain unclear. In this paper, we adopt the PAC-Bayesian generalization error bound, viewing pre-training as a shift of the prior distribution that leads to a tighter bound on the generalization error. We validate this shift from the perspectives of the oscillations in the loss landscape and the quasi-sparsity of the gradient distribution. Based on this, we propose a gradient-based sparse fine-tuning algorithm, named Sparse Increment Fine-Tuning (SIFT), and validate its effectiveness on a range of tasks including the GLUE Benchmark and instruction-tuning. The code is accessible at https://github.com/song-wx/SIFT/.

Weixi Song, Zuchao Li, Lefei Zhang, Hai Zhao, Bo Du · 2023
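
The repository linked above contains the authors' implementation. As a rough illustration of the general idea of gradient-based sparse fine-tuning (a hedged sketch, not the paper's exact SIFT algorithm), the PyTorch snippet below selects the entries with the largest gradient magnitude on a warm-up batch and restricts all subsequent updates to those entries; the toy model, the `density` value, and the helper names are illustrative assumptions.

```python
# Illustrative sketch of gradient-based sparse fine-tuning (not the authors' SIFT code;
# see https://github.com/song-wx/SIFT/ for that). Because fine-tuning gradients are
# quasi-sparse, only the small fraction of entries with the largest warm-up gradient
# magnitude is allowed to change.
import torch
import torch.nn as nn


def build_sparse_masks(model: nn.Module, loss: torch.Tensor, density: float = 0.05):
    """Select the top-`density` fraction of entries per parameter by |gradient|."""
    loss.backward()
    masks = {}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        k = max(1, int(density * p.numel()))
        # Indices of the k largest-magnitude gradient entries.
        topk = torch.topk(p.grad.abs().flatten(), k).indices
        mask = torch.zeros_like(p, dtype=torch.bool).flatten()
        mask[topk] = True
        masks[name] = mask.view_as(p)
    model.zero_grad()
    return masks


def apply_masks(model: nn.Module, masks: dict):
    """Zero out gradients outside the mask so the optimizer only updates selected entries."""
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(masks[name].to(p.grad.dtype))


# Usage sketch with a hypothetical toy model and random data.
model = nn.Linear(16, 2)
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
criterion = nn.CrossEntropyLoss()
masks = build_sparse_masks(model, criterion(model(x), y), density=0.05)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
for _ in range(3):  # a few toy fine-tuning steps
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    apply_masks(model, masks)  # keep the parameter increment sparse
    optimizer.step()
```

Masking gradients rather than parameters keeps the optimizer untouched, so the sparse update plugs into a standard fine-tuning loop with one extra call per step.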

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | ImageNet-1K | Top-1 Accuracy | 80.86 | 1239
Image Classification | CIFAR-100 | Accuracy | 91.45 | 435
Natural Language Understanding | GLUE | SST-2 | 95.18 | 55
Image Classification | CIFAR-10 | Accuracy | 99.09 | 5
