Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning

About

Recent pretrained language models have grown from millions to billions of parameters, so the need to fine-tune an extremely large pretrained model with a limited training corpus arises in many downstream tasks. In this paper, we propose a straightforward yet effective fine-tuning technique, Child-Tuning, which updates a subset of parameters (called the child network) of a large pretrained model by strategically masking out the gradients of the non-child network during the backward pass. Experiments on various downstream tasks in the GLUE benchmark show that Child-Tuning consistently outperforms vanilla fine-tuning by 1.5 to 8.6 points in average score across four different pretrained models, and surpasses prior fine-tuning techniques by 0.6 to 1.3 points. Furthermore, empirical results on domain transfer and task transfer show that Child-Tuning achieves better generalization performance by large margins.

Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang • 2021
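
The gradient-masking idea described in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch rendition of the task-free flavor of this approach (Child-Tuning_F), in which the child network is redrawn at each step as a Bernoulli mask over gradients and the surviving gradients are rescaled to keep the expected update unchanged. The function name child_tuning_step and the rate p_child are illustrative, not taken from the authors' code.

```python
import torch

def child_tuning_step(model, loss, optimizer, p_child=0.2):
    """One illustrative Child-Tuning_F-style update (sketch, not the
    official implementation): gradients of parameters outside a randomly
    drawn child network are zeroed before the optimizer step."""
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():
        for param in model.parameters():
            if param.grad is None:
                continue
            # Bernoulli mask: keep each gradient entry with probability
            # p_child; rescale kept gradients by 1/p_child so the update
            # is unbiased in expectation.
            mask = torch.bernoulli(torch.full_like(param.grad, p_child))
            param.grad.mul_(mask).div_(p_child)
    optimizer.step()
```

In the paper's task-driven variant, the mask is instead fixed in advance from task-specific parameter importance (e.g., Fisher information) rather than resampled at every step; the sketch above shows only the task-free case.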

Related benchmarks

Task                           | Dataset        | Result              | Rank
-------------------------------|----------------|---------------------|-----
Natural Language Inference     | RTE            | Accuracy 70.87      | 448
Natural Language Understanding | GLUE (val)     | --                  | 191
Natural Language Inference     | SNLI           | Accuracy 84.41      | 180
Natural Language Inference     | MNLI (matched) | Accuracy 79.13      | 110
Natural Language Inference     | MNLI           | --                  | 80
Question Answering             | SQuAD (val)    | F1 Score 88.5       | 26
Binary Classification          | AdvGLUE (test) | QNLI Accuracy 0.496 | 17
Natural Language Inference     | SICK           | Accuracy 55.69      | 16
Natural Language Inference     | SciTail        | --                  | 13
Commonsense Reasoning          | SWAG (val)     | Accuracy 83.7       | 9
