
Muppet: Massive Multi-task Representations with Pre-Finetuning

About

We propose pre-finetuning, an additional large-scale learning stage between language model pre-training and fine-tuning. Pre-finetuning is massively multi-task learning (around 50 datasets, over 4.8 million total labeled examples), and is designed to encourage learning of representations that generalize better to many different tasks. We show that pre-finetuning consistently improves performance for pretrained discriminators (e.g., RoBERTa) and generation models (e.g., BART) on a wide range of tasks (sentence prediction, commonsense reasoning, MRC, etc.), while also significantly improving sample efficiency during fine-tuning. We also show that large-scale multi-tasking is crucial; pre-finetuning can hurt performance when few tasks are used, up until a critical point (usually above 15), after which performance improves linearly in the number of tasks.
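For intuition, here is a minimal sketch of what a pre-finetuning stage can look like in code: a shared encoder trained jointly on many labeled tasks, each with its own classification head, with gradients accumulated across tasks before every optimizer step and each task's loss rescaled so tasks with different label-set sizes contribute comparably. The tiny stand-in encoder, the three-task registry, and the random batches below are illustrative assumptions to keep the sketch self-contained and runnable; the paper itself pre-finetunes pretrained RoBERTa/BART on roughly 50 real datasets.

```python
import math
import torch
import torch.nn as nn

VOCAB, MAX_LEN, DIM = 1000, 64, 128

class SharedEncoder(nn.Module):
    """Tiny stand-in for a pretrained encoder (e.g. RoBERTa), so the sketch is self-contained."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        return self.encoder(self.embed(tokens)).mean(dim=1)   # mean-pooled (batch, DIM)

# Hypothetical task registry: task name -> number of label classes.
TASKS = {"nli": 3, "sentiment": 2, "topic": 4}

encoder = SharedEncoder()
heads = nn.ModuleDict({name: nn.Linear(DIM, n) for name, n in TASKS.items()})
params = list(encoder.parameters()) + list(heads.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-5)
loss_fn = nn.CrossEntropyLoss()

def toy_batch(n_classes, batch_size=8):
    """Random placeholder batch; in practice this comes from the task's real dataset."""
    tokens = torch.randint(0, VOCAB, (batch_size, MAX_LEN))
    labels = torch.randint(0, n_classes, (batch_size,))
    return tokens, labels

for step in range(10):                          # pre-finetuning loop (toy scale)
    optimizer.zero_grad()
    for task, n_classes in TASKS.items():       # accumulate gradients across all tasks
        tokens, labels = toy_batch(n_classes)
        logits = heads[task](encoder(tokens))
        # Divide by log(n_classes) so tasks with different label-set sizes contribute
        # comparably -- a loss-scaling heuristic in the spirit of the paper.
        loss = loss_fn(logits, labels) / math.log(n_classes)
        loss.backward()
    optimizer.step()
```

After this stage, the shared encoder (not the task heads) would be carried forward and fine-tuned on each downstream task individually.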

Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, Sonal Gupta • 2021

Related benchmarks

Task                                       Dataset      Metric     Result   Rank
Commonsense Reasoning                      HellaSwag    Accuracy   86.4     1891
Natural Language Inference                 RTE          Accuracy   39.44    448
Physical Interaction Question Answering    PIQA         Accuracy   55.47    333
Boolean Question Answering                 BoolQ        Accuracy   74.27    323
Question Answering                         BoolQ        Accuracy   82.17    317
Question Answering                         OBQA         Accuracy   39.47    300
Question Classification                    TREC         Accuracy   96.8     259
Topic Classification                       AG-News      Accuracy   89.77    225
Natural Language Understanding             GLUE (val)   SST-2      97.4     191
Common Sense Reasoning                     WinoGrande   Accuracy   55.49    189

Showing 10 of 77 rows.

Other info

Code
