
Parameter-Efficient Transfer Learning for NLP

About

Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. To demonstrate the adapters' effectiveness, we transfer the recently proposed BERT Transformer model to 26 diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task. On GLUE, we attain within 0.4% of the performance of full fine-tuning, adding only 3.6% parameters per task. By contrast, fine-tuning trains 100% of the parameters per task.
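The bottleneck structure described in the abstract can be sketched in a few lines. This is a minimal, hypothetical NumPy illustration (function and variable names are my own, and the hidden size 768 / bottleneck size 64 are merely illustrative): an adapter down-projects the hidden state, applies a nonlinearity, up-projects back, and adds a residual skip connection. With the up-projection initialized to zero, the adapter starts out as an identity mapping, so inserting it does not disturb the pre-trained network.

```python
import numpy as np

def adapter(h, W_down, b_down, W_up, b_up):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    z = np.maximum(0.0, h @ W_down + b_down)  # ReLU in the bottleneck
    return h + z @ W_up + b_up                # skip connection around the adapter

# Illustrative sizes: hidden dim 768 (as in BERT-base), bottleneck dim 64
d, m = 768, 64
rng = np.random.default_rng(0)
h = rng.standard_normal((4, d))               # a batch of 4 hidden states

W_down = rng.standard_normal((d, m)) * 0.02
b_down = np.zeros(m)
W_up = np.zeros((m, d))                       # near-identity init: adapter output == input
b_up = np.zeros(d)

out = adapter(h, W_down, b_down, W_up, b_up)

# Trainable parameters per adapter: 2*d*m + d + m -- small relative to the
# frozen d*d projections of the host layer.
n_params = W_down.size + b_down.size + W_up.size + b_up.size
```

Only these adapter parameters (plus layer norms and the task head, in the paper's setup) are trained per task; the original network stays frozen and shared.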

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly • 2019

Related benchmarks

| Task                       | Dataset               | Result               | Rank |
|----------------------------|-----------------------|----------------------|------|
| Semantic Segmentation      | ADE20K (val)          | mIoU 50.78           | 2888 |
| Object Detection           | COCO 2017 (val)       | --                   | 2643 |
| Image Classification       | ImageNet-1K 1.0 (val) | Top-1 Accuracy 82.72 | 1952 |
| Instance Segmentation      | COCO 2017 (val)       | --                   | 1201 |
| Question Answering         | ARC Challenge         | Accuracy 81.98       | 906  |
| Image Super-Resolution     | Manga109              | PSNR 25.13           | 821  |
| Image Classification       | CIFAR-100             | Accuracy 84          | 691  |
| Natural Language Inference | SNLI (test)           | Accuracy 91.9        | 690  |
| Image Classification       | Stanford Cars         | Accuracy 68.6        | 635  |
| Question Answering         | ARC Easy              | Accuracy 93.84       | 597  |
Showing 10 of 407 rows.

Other info

Code
