
Exploring Versatile Generative Language Model Via Parameter-Efficient Transfer Learning

About

Fine-tuning pre-trained generative language models on downstream language generation tasks has shown promising results. However, this comes at the cost of having a single, large model for each task, which is not ideal in low-memory/power scenarios (e.g., mobile). In this paper, we propose an effective way to fine-tune multiple downstream generation tasks simultaneously using a single, large pre-trained model. Experiments on five diverse language generation tasks show that, by using only an additional 2-3% of parameters for each task, our model can maintain or even improve the performance of fine-tuning the whole model.
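The paper itself is not reproduced on this page, but the 2-3% per-task parameter budget in the abstract is characteristic of adapter-style parameter-efficient transfer learning. Below is a minimal, hypothetical PyTorch sketch of a residual bottleneck adapter in that spirit; the class, the bottleneck size of 128, and the back-of-the-envelope budget for a GPT-2-small-sized model are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter (illustrative sketch, not the paper's code).

    A small task-specific module inserted after a frozen transformer sub-layer.
    Only these weights are trained per task, while the large pre-trained model
    stays shared and fixed across all tasks."""

    def __init__(self, hidden_size: int, bottleneck_size: int):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)  # project down
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_size, hidden_size)    # project back up

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: the adapter only learns a small correction
        # on top of the frozen layer's output.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


# Back-of-the-envelope parameter budget (assumed numbers, not from the paper):
# one adapter per layer of a GPT-2-small-sized model (~124M parameters, hidden size 768).
hidden, bottleneck, layers = 768, 128, 12
per_layer = (hidden * bottleneck + bottleneck) + (bottleneck * hidden + hidden)
extra = layers * per_layer
print(f"extra parameters per task: {extra:,} (~{extra / 124e6:.1%} of 124M)")
```

Under these assumptions, only the small adapter weights would need to be stored per task, which is what makes sharing one large pre-trained model across many generation tasks practical in low-memory settings.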

Zhaojiang Lin, Andrea Madotto, Pascale Fung • 2020

Related benchmarks

Task                            Dataset                    Result             Rank
Natural language understanding  GLUE                       SST-2: 96.6        452
Natural language generation     E2E (test)                 ROUGE-L: 89.48     79
Natural language generation     E2E NLG Challenge          BLEU: 69.1         58
Data-to-text generation         DART (test)                BLEU: 45.7         42
Data-to-text generation         E2E                        ROUGE-L: 0.713     36
Table-to-text generation        DART                       METEOR: 0.38       30
Natural language generation     WebNLG unseen categories   BLEU: 49.8         17
Table-to-text generation        WebNLG                     BLEU (seen): 60.4  17
Natural language generation     WebNLG all categories      BLEU: 56           11
Natural language generation     WebNLG seen categories     BLEU: 61.1         11

(Showing 10 of 13 rows.)
