
OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization

About

Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero- and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalization: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats: PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks, but it is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
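The abstract mentions two concrete ingredients of instruction-tuning: serializing each task as an instruction (optionally prefixed with demonstrations) and mixing many tasks under a sampling strategy. The minimal Python sketch below illustrates what those ingredients can look like; it is not the authors' implementation, and all names (TASKS, format_example, sample_mixture, max_per_task) and the tiny example tasks are hypothetical.

```python
# Illustrative sketch only: not the OPT-IML codebase. It shows, with made-up
# task data and a generic proportional-sampling heuristic, how instruction-
# tuning examples can be serialized and mixed across tasks.
import random

# Each task contributes (instruction, input, output) triples; the instruction
# is a natural-language description of the task.
TASKS = {
    "rte_nli": [
        {"instruction": "Does the premise entail the hypothesis? Answer yes or no.",
         "input": "Premise: It is raining. Hypothesis: The ground is wet.",
         "output": "yes"},
    ],
    "openbookqa": [
        {"instruction": "Answer the science question with one of the choices.",
         "input": "Which of these conducts electricity? (a) wood (b) copper",
         "output": "copper"},
    ],
}

def format_example(ex, demonstrations=None):
    """Serialize one example into a single training string.

    Optionally prepend solved demonstrations, which corresponds to the
    "fine-tuning with demonstrations" setting discussed in the abstract.
    """
    demo_text = ""
    if demonstrations:
        demo_text = "\n".join(
            f"{d['instruction']}\n{d['input']}\n{d['output']}" for d in demonstrations
        ) + "\n"
    return f"{demo_text}{ex['instruction']}\n{ex['input']}\n{ex['output']}"

def sample_mixture(tasks, num_samples, max_per_task=None, seed=0):
    """Draw a training mixture roughly proportional to task sizes.

    `max_per_task` caps how much a very large task can dominate the mixture,
    one simple way to realize a task-sampling strategy.
    """
    rng = random.Random(seed)
    pool = []
    for name, examples in tasks.items():
        k = len(examples) if max_per_task is None else min(len(examples), max_per_task)
        pool.extend((name, ex) for ex in rng.sample(examples, k))
    return [format_example(ex) for _, ex in rng.choices(pool, k=num_samples)]

if __name__ == "__main__":
    for text in sample_mixture(TASKS, num_samples=2, max_per_task=1):
        print(text, "\n---")
```

In practice the sampling strategy (uniform over tasks, proportional to task size, or capped-proportional as sketched here) is one of the instruction-tuning decisions the paper ablates.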

Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, Ves Stoyanov • 2022

Related benchmarks

Task                        | Dataset          | Result                | Rank
Question Answering         | OpenBookQA       | Accuracy: 79.9        | 465
Natural Language Inference | RTE              | Accuracy: 82.1        | 367
Reading Comprehension      | BoolQ            | Accuracy: 81.7        | 219
Science Question Answering | ScienceQA (test) | Average Accuracy: 49  | 208
Natural Language Inference | SNLI             | Accuracy: 67.1        | 174
Text-to-SQL                | Spider (test)    | --                    | 140
Natural Language Inference | MNLI (matched)   | Accuracy: 64.4        | 110
Visual Question Answering  | VQA v2 (val)     | Accuracy: 36          | 99
Coreference Resolution     | WSC              | Accuracy: 73.9        | 96
Natural Language Inference | ANLI Round 2     | Accuracy: 43.8        | 64

Showing 10 of 43 benchmark results.
