
Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively

About

Large-scale pre-trained language models have recently achieved impressive results on a wide range of downstream tasks. However, fine-tuning an extremely large-scale pre-trained language model on limited target datasets is often plagued by overfitting and representation degradation. In this paper, we propose a Dynamic Parameter Selection (DPS) algorithm for large-scale pre-trained models during fine-tuning, which adaptively selects a more promising subnetwork to perform staged updates based on the gradients from back-propagation. Experiments on the GLUE benchmark show that DPS outperforms previous fine-tuning methods in terms of overall performance and stability, and consistently achieves better results across various pre-trained language models. In addition, DPS brings large improvements in out-of-domain transfer experiments and low-resource scenarios, which shows that it can maintain stable general contextual features and reduce representation collapse. We release our code at https://github.com/ZhangHaojie077/DPS.

Haojie Zhang, Ge Li, Jia Li, Zhongjin Zhang, Yuqi Zhu, Zhi Jin • 2022
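At a high level, DPS back-propagates through the full model, scores parameters using the resulting gradients, and then updates only the more promising subnetwork. The sketch below illustrates that idea in PyTorch as a minimal example; the mean-absolute-gradient scoring rule, the `keep_ratio` argument, and the per-tensor selection granularity are illustrative assumptions rather than the authors' exact procedure (see the linked repository for the real implementation).

```python
import torch

def dps_style_step(model, loss, optimizer, keep_ratio=0.5):
    """One illustrative fine-tuning step: back-propagate on the full model,
    score each parameter tensor by its mean absolute gradient, zero the
    gradients of lower-scoring tensors, and let the optimizer update only
    the selected subnetwork. The scoring rule and keep_ratio are
    assumptions for illustration, not the paper's exact criterion."""
    optimizer.zero_grad()
    loss.backward()

    # Score every parameter tensor that received a gradient.
    params = [p for p in model.parameters() if p.grad is not None]
    scores = torch.stack([p.grad.detach().abs().mean() for p in params])

    # Keep roughly the top `keep_ratio` fraction of tensors by score.
    threshold = torch.quantile(scores, 1.0 - keep_ratio)

    # Zero the gradients of non-selected tensors so the optimizer
    # leaves them untouched.
    for p, s in zip(params, scores):
        if s < threshold:
            p.grad.zero_()

    optimizer.step()
```

In a training loop, a call like `dps_style_step(model, loss, optimizer)` would stand in for the usual `loss.backward(); optimizer.step()` pair on the batches where a subnetwork update is desired.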

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Natural Language Inference | RTE | Accuracy: 73.16 | 448 |
| Natural Language Understanding | GLUE (val) | -- | 191 |
| Natural Language Inference | SNLI | Accuracy: 84.83 | 180 |
| Natural Language Inference | MNLI (matched) | Accuracy: 79.16 | 110 |
| Natural Language Inference | MNLI | -- | 80 |
| Natural Language Inference | SICK | Accuracy: 58.18 | 16 |
| Natural Language Inference | SciTail | -- | 13 |

Other info

Code
