OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization
About
Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero- and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalization: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks, but it is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
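The three generalization settings above can be sketched as data splits over a task collection. The function and data layout below are illustrative assumptions, not the paper's actual code: tasks are mapped to a `(category, instances)` pair, and each task falls into exactly one evaluation setting.

```python
import random

def make_splits(tasks, heldout_categories, heldout_tasks, seed=0):
    """Partition a task collection into the three evaluation settings:
    1) tasks from fully held-out categories,
    2) held-out tasks from seen categories,
    3) held-out instances from seen (training) tasks.

    `tasks` maps task name -> (category, list of instances).
    """
    rng = random.Random(seed)
    eval_category = {}              # setting 1: whole category unseen in training
    eval_task = {}                  # setting 2: task unseen, category seen
    train, eval_instance = {}, {}   # setting 3: instance-level split of a seen task
    for name, (category, instances) in tasks.items():
        if category in heldout_categories:
            eval_category[name] = instances
        elif name in heldout_tasks:
            eval_task[name] = instances
        else:
            shuffled = instances[:]
            rng.shuffle(shuffled)
            cut = max(1, len(shuffled) // 10)  # hold out ~10% of instances
            eval_instance[name] = shuffled[:cut]
            train[name] = shuffled[cut:]
    return train, eval_category, eval_task, eval_instance
```

For example, holding out the Coreference category and the SNLI task (while keeping RTE, also NLI, in training) exercises all three settings at once.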
Benchmark results
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | OpenBookQA | Accuracy | 79.9 | 465 |
| Natural Language Inference | RTE | Accuracy | 82.1 | 367 |
| Reading Comprehension | BoolQ | Accuracy | 81.7 | 219 |
| Science Question Answering | ScienceQA (test) | Average Accuracy | 49 | 208 |
| Natural Language Inference | SNLI | Accuracy | 67.1 | 174 |
| Text-to-SQL | Spider (test) | -- | -- | 140 |
| Natural Language Inference | MNLI (matched) | Accuracy | 64.4 | 110 |
| Visual Question Answering | VQA v2 (val) | Accuracy | 36 | 99 |
| Coreference Resolution | WSC | Accuracy | 73.9 | 96 |
| Natural Language Inference | ANLI Round 2 | Accuracy | 43.8 | 64 |