
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

About

Meta-training, which fine-tunes the language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved the zero-shot task generalization performance. However, meta-trained LMs still struggle to generalize to challenging tasks containing novel labels unseen during meta-training. In this paper, we propose Flipped Learning, an alternative method of meta-training which trains the LM to generate the task instruction given the input instance and label. During inference, the LM trained with Flipped Learning, referred to as Flipped, selects the label option that is most likely to generate the task instruction. On 14 tasks of the BIG-bench benchmark, the 11B-sized Flipped outperforms zero-shot T0-11B and even a 16 times larger 3-shot GPT-3 (175B) on average by 8.4% and 9.7% points, respectively. Flipped gives particularly large improvements on tasks with unseen labels, outperforming T0-11B by up to +20% average F1 score. This indicates that the strong task generalization of Flipped comes from improved generalization to novel labels. We release our code at https://github.com/seonghyeonye/Flipped-Learning.
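The inference rule described above can be sketched in a few lines: instead of scoring P(label | instruction, input) as a standard meta-trained LM would, Flipped scores P(instruction | input, label) for each candidate label and picks the highest-scoring one. The sketch below assumes a `log_likelihood(target, condition)` interface to the underlying seq2seq LM; that interface and the toy scorer are illustrative, not part of the released code.

```python
# Minimal sketch of Flipped inference (the scorer interface is an assumption,
# not the authors' actual API). Flipped selects the label option under which
# the task instruction is most likely to be generated.
import math

def flipped_select(instruction, input_instance, label_options, log_likelihood):
    """Return the label whose (input, label) pair best 'generates' the instruction.

    log_likelihood(target, condition) -> log P(target | condition), supplied by
    the underlying seq2seq LM (hypothetical interface).
    """
    scores = {}
    for label in label_options:
        # Condition on the input instance concatenated with the candidate label
        condition = f"{input_instance} {label}"
        scores[label] = log_likelihood(instruction, condition)
    # Pick the label that makes the instruction most likely
    return max(scores, key=scores.get), scores

# Toy demonstration with a hand-crafted scorer (NOT a real LM):
def toy_ll(target, condition):
    # Pretend conditions that lexically overlap with the target are likelier
    overlap = len(set(target.lower().split()) & set(condition.lower().split()))
    return overlap - math.log(1 + len(condition))

best, scores = flipped_select(
    "Is the following review positive or negative?",
    "The movie was a delight.",
    ["positive", "negative"],
    toy_ll,
)
```

With a real meta-trained seq2seq model, `log_likelihood` would be the summed token log-probabilities of the instruction given the concatenated input and label.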

Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo• 2022

Related benchmarks

Task | Dataset | Result | Rank
Question Answering | ARC Challenge | - | 749
Question Answering | OpenBookQA | Accuracy 72.54 | 465
Physical Commonsense Reasoning | PIQA | Accuracy 71.65 | 329
Common Sense Reasoning | WinoGrande | Accuracy 66.57 | 156
Common Sense Reasoning | COPA | Accuracy 90.75 | 138
Sentence Completion | HellaSwag | Accuracy 41.97 | 133
Word Sense Disambiguation | WiC | - | 84
Story Completion | StoryCloze | Accuracy 96.12 | 65
Various NLP tasks (NLU and Reasoning) | BIG-bench (unseen) | Known Unknowns Score 86.96 | 15
General Language Understanding | P3 v1 (unseen) | RTE Accuracy 71.05 | 11

Showing 10 of 16 rows

Other info

Code: https://github.com/seonghyeonye/Flipped-Learning