
Prototypical Calibration for Few-shot Learning of Language Models

About

In-context learning of GPT-like models has been recognized as fragile across different hand-crafted templates and demonstration permutations. In this work, we propose prototypical calibration to adaptively learn a more robust decision boundary for zero- and few-shot classification, instead of greedy decoding. Concretely, our method first adopts a Gaussian mixture distribution to estimate the prototypical clusters for all categories. Then we assign each cluster to the corresponding label by solving a weighted bipartite matching problem. Given an example, its prediction is calibrated by the likelihood of the prototypical clusters. Experimental results show that prototypical calibration yields a substantial improvement on a diverse set of tasks. Extensive analysis across different scales also indicates that our method calibrates the decision boundary as expected, greatly improving the robustness of GPT to templates, permutations, and class imbalance.

Zhixiong Han, Yaru Hao, Li Dong, Yutao Sun, Furu Wei • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multi-task Language Understanding | MMLU | Accuracy: 45.5 | 842 |
| Natural Language Inference | RTE | Accuracy: 75.31 | 367 |
| Text Classification | AG News (test) | - | 210 |
| Question Classification | TREC | Accuracy: 81.85 | 205 |
| Topic Classification | AG-News | Accuracy: 86.81 | 173 |
| Question Answering | ARC | Accuracy: 59.47 | 154 |
| Sentiment Analysis | MR | Accuracy: 92.8 | 142 |
| Sentiment Analysis | CR | Accuracy: 91.97 | 123 |
| Word Sense Disambiguation | WiC | Avg Accuracy: 57.11 | 84 |
| Sentiment Analysis | SST-5 | Accuracy: 55.41 | 47 |
Showing 10 of 15 rows.
