Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks

About

State-of-the-art parameter-efficient fine-tuning methods rely on introducing adapter modules between the layers of a pretrained language model. However, such modules are trained separately for each task and thus do not enable sharing information across tasks. In this paper, we show that we can learn adapter parameters for all layers and tasks by generating them using shared hypernetworks, which condition on task, adapter position, and layer id in a transformer model. This parameter-efficient multi-task learning framework allows us to achieve the best of both worlds by sharing knowledge across tasks via hypernetworks while enabling the model to adapt to each individual task through task-specific adapters. Experiments on the well-known GLUE benchmark show improved performance in multi-task learning while adding only 0.29% parameters per task. We additionally demonstrate substantial performance improvements in few-shot domain generalization across a variety of tasks. Our code is publicly available at https://github.com/rabeehk/hyperformer.

Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, James Henderson • 2021
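
The core idea in the abstract — one shared hypernetwork emitting adapter weights conditioned on task, layer, and adapter position — can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the class name, embedding sizes, bottleneck dimension, and the two-position convention are assumptions for exposition, not the paper's implementation (see the linked repository for the authors' code).

```python
import torch
import torch.nn as nn

class HyperAdapter(nn.Module):
    """Sketch of a shared hypernetwork that generates bottleneck-adapter
    weights conditioned on task, layer, and adapter position. All names
    and sizes here are illustrative assumptions, not the paper's code."""

    def __init__(self, n_tasks, n_layers, d_model=768, bottleneck=32, d_embed=64):
        super().__init__()
        self.d_model, self.bottleneck = d_model, bottleneck
        # Learned embeddings for the three conditioning inputs.
        self.task_embed = nn.Embedding(n_tasks, d_embed)
        self.layer_embed = nn.Embedding(n_layers, d_embed)
        self.pos_embed = nn.Embedding(2, d_embed)  # 0: after attention, 1: after FFN
        # Fuse the conditions into one source embedding for the generators.
        self.fuse = nn.Linear(3 * d_embed, d_embed)
        # Shared generators: map the source embedding to the flattened
        # down- and up-projection weights of a bottleneck adapter.
        self.gen_down = nn.Linear(d_embed, d_model * bottleneck)
        self.gen_up = nn.Linear(d_embed, bottleneck * d_model)

    def forward(self, hidden, task_id, layer_id, pos_id):
        # hidden: (batch, seq, d_model); the ids are plain Python ints.
        dev = hidden.device
        z = torch.cat([self.task_embed(torch.tensor(task_id, device=dev)),
                       self.layer_embed(torch.tensor(layer_id, device=dev)),
                       self.pos_embed(torch.tensor(pos_id, device=dev))], dim=-1)
        z = torch.relu(self.fuse(z))
        w_down = self.gen_down(z).view(self.d_model, self.bottleneck)
        w_up = self.gen_up(z).view(self.bottleneck, self.d_model)
        # Standard bottleneck adapter with a residual connection.
        return hidden + torch.relu(hidden @ w_down) @ w_up

# Example: one adapter pass for task 3, layer 5, the post-FFN position.
adapter = HyperAdapter(n_tasks=8, n_layers=12)
out = adapter(torch.randn(4, 16, 768), task_id=3, layer_id=5, pos_id=1)
```

Because the weight generators are shared across all tasks, layers, and positions, adding a task only adds a new row to the task embedding table, which is consistent with the small per-task parameter overhead reported in the abstract.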

Related benchmarks

Task | Dataset | Metric | Result | Rank
Physical Interaction Question Answering | PIQA | Accuracy | 55.6 | 323
Boolean Question Answering | BoolQ | Accuracy | 73.58 | 307
Question Answering | OBQA | Accuracy | 41 | 276
Sentiment Analysis | IMDB (test) | Accuracy | 86.6 | 248
Natural Language Understanding | GLUE (val) | SST-2 | 94.03 | 170
Common Sense Reasoning | WinoGrande | Accuracy | 54.93 | 156
Visual Question Answering | VQA (test-dev) | Acc (All) | 67.5 | 147
Question Answering | PubMedQA | Accuracy | 53 | 145
Question Classification | TREC (test) | Accuracy | 96.92 | 124
Multiple-choice Question Answering | ARC Easy | Accuracy | 40.18 | 122

(Showing 10 of 53 rows.)

Other info

Code: https://github.com/rabeehk/hyperformer
