
Towards a Unified View of Parameter-Efficient Transfer Learning

About

Fine-tuning large pre-trained language models on downstream tasks has become the de-facto learning paradigm in NLP. However, conventional approaches fine-tune all the parameters of the pre-trained model, which becomes prohibitive as the model size and the number of tasks grow. Recent work has proposed a variety of parameter-efficient transfer learning methods that only fine-tune a small number of (extra) parameters to attain strong performance. While effective, the critical ingredients for success and the connections among the various methods are poorly understood. In this paper, we break down the design of state-of-the-art parameter-efficient transfer learning methods and present a unified framework that establishes connections between them. Specifically, we re-frame them as modifications to specific hidden states in pre-trained models, and define a set of design dimensions along which different methods vary, such as the function to compute the modification and the position to apply the modification. Through comprehensive empirical studies across machine translation, text summarization, language understanding, and text classification benchmarks, we utilize the unified view to identify important design choices in previous methods. Furthermore, our unified framework enables the transfer of design elements across different approaches, and as a result we are able to instantiate new parameter-efficient fine-tuning methods that tune fewer parameters than previous methods while being more effective, achieving comparable results to fine-tuning all parameters on all four tasks.
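The unified view described above treats each method as computing a modification Δh that is added to a specific hidden state h. As an illustrative sketch (not the paper's implementation), an adapter-style modification can be written as Δh = s · f(h · W_down) · W_up, where W_down and W_up form a low-rank bottleneck, f is a nonlinearity, and s is a scaling factor; the names and shapes here are assumptions for illustration only:

```python
import numpy as np

def adapter_delta(h, W_down, W_up, scale=1.0):
    """Illustrative modification Δh = scale * ReLU(h @ W_down) @ W_up.

    Only W_down (d x r) and W_up (r x d) are trained, so the number of
    tuned parameters is 2*d*r instead of the full model's parameters.
    """
    return scale * np.maximum(h @ W_down, 0.0) @ W_up

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden size d, bottleneck width r << d
h = rng.standard_normal(d)            # a hidden state of the frozen model
W_down = 0.1 * rng.standard_normal((d, r))
W_up = 0.1 * rng.standard_normal((r, d))

# The design dimensions in the paper include *where* this Δh is applied
# (e.g., after attention or the feed-forward sublayer) and *how* it is
# composed with h; simple additive composition is shown here.
h_modified = h + adapter_delta(h, W_down, W_up)
```

Other methods covered by the framework (e.g., prefix tuning, LoRA) differ in the function computing Δh and the position where it is applied, not in this basic additive structure.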

Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | Accuracy | 91.7 | 3518 |
| Commonsense Reasoning | PIQA | Accuracy | 69.8 | 647 |
| Natural Language Understanding | GLUE | SST-2 | 96.1 | 452 |
| Natural Language Understanding | GLUE (test) | SST-2 Accuracy | 83.9 | 416 |
| Text-to-Video Retrieval | DiDeMo (test) | R@1 | 36.4 | 376 |
| Question Answering | OBQA | Accuracy | 76.3 | 276 |
| Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score | 81.4 | 241 |
| Science Question Answering | ARC Challenge | Accuracy | 54.2 | 234 |
| Reading Comprehension | BoolQ | Accuracy | 75.3 | 219 |
| Mathematical Reasoning | GSM8K | Accuracy | 56.4 | 212 |

(Showing 10 of 33 rows)
