
Prototypical Calibration for Few-shot Learning of Language Models

About

In-context learning of GPT-like models has been recognized as fragile across different hand-crafted templates and demonstration permutations. In this work, we propose prototypical calibration to adaptively learn a more robust decision boundary for zero- and few-shot classification, instead of greedy decoding. Concretely, our method first adopts a Gaussian mixture distribution to estimate the prototypical clusters for all categories. Then we assign each cluster to the corresponding label by solving a weighted bipartite matching problem. Given an example, its prediction is calibrated by the likelihood of the prototypical clusters. Experimental results show that prototypical calibration yields a substantial improvement on a diverse set of tasks. Extensive analysis across different scales also indicates that our method calibrates the decision boundary as expected, greatly improving the robustness of GPT to templates, permutations, and class imbalance.

Zhixiong Han, Yaru Hao, Li Dong, Yutao Sun, Furu Wei • 2022
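The three steps in the abstract (Gaussian mixture estimation, weighted bipartite matching of clusters to labels, likelihood-based prediction) can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' released implementation: it assumes the language model's label probabilities have already been collected into arrays, uses scikit-learn's GaussianMixture for the prototypical clusters and SciPy's linear_sum_assignment (the Hungarian algorithm) for the matching, and the log-space features and mean-based matching weights are assumptions of this sketch.

```python
# Minimal sketch of prototypical calibration (assumptions noted above).
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.mixture import GaussianMixture

def prototypical_calibration(estimate_probs, test_probs, seed=0):
    """estimate_probs: (n_est, L) LM label probabilities on an unlabeled
    estimation set; test_probs: (n_test, L) probabilities for test inputs.
    Returns calibrated label indices for the test inputs."""
    n_labels = estimate_probs.shape[1]

    # Step 1: fit a Gaussian mixture with one component per label.
    # Log space is an assumption of this sketch, chosen to spread out
    # probabilities that cluster near 0 or 1.
    log_est = np.log(estimate_probs + 1e-12)
    gmm = GaussianMixture(n_components=n_labels, random_state=seed)
    gmm.fit(log_est)

    # Step 2: weighted bipartite matching between clusters and labels.
    # Weight each (cluster, label) pair by the cluster mean on that label's
    # dimension; negate so the Hungarian algorithm maximizes total weight.
    cost = -gmm.means_  # shape (L clusters, L labels)
    cluster_idx, label_idx = linear_sum_assignment(cost)
    cluster_to_label = dict(zip(cluster_idx, label_idx))

    # Step 3: calibrate each prediction by the likelihood of the clusters,
    # replacing the argmax over raw LM probabilities.
    posteriors = gmm.predict_proba(np.log(test_probs + 1e-12))
    best_cluster = posteriors.argmax(axis=1)
    return np.array([cluster_to_label[c] for c in best_cluster])
```

Note that the estimation set needs no labels: the decision boundary is learned from the shape of the LM's output distribution alone, which is what makes the method applicable in zero- and few-shot settings.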

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-task Language Understanding | MMLU | Accuracy | 45.5 | 876 |
| Natural Language Inference | RTE | Accuracy | 75.31 | 448 |
| Question Classification | TREC | Accuracy | 81.85 | 259 |
| Question Answering | ARC | Accuracy | 59.47 | 230 |
| Text Classification | AG News (test) | -- | -- | 228 |
| Topic Classification | AG-News | Accuracy | 86.81 | 225 |
| Sentiment Analysis | MR | Accuracy | 92.8 | 160 |
| Sentiment Analysis | CR | Accuracy | 91.97 | 141 |
| Sentiment Analysis | SST-5 | Accuracy | 55.41 | 106 |
| Word Sense Disambiguation | WiC | Avg Accuracy | 57.11 | 87 |

Showing 10 of 15 rows.
