Small Models are Valuable Plug-ins for Large Language Models

About

Large language models (LLMs) such as GPT-3 and GPT-4 are powerful, but their weights are often publicly unavailable and their immense sizes make them difficult to tune on common hardware. As a result, effectively tuning these models with large-scale supervised data can be challenging. As an alternative, in-context learning (ICL) can use only a small number of supervised examples due to context length limits. In this paper, we propose Super In-Context Learning (SuperICL), which allows black-box LLMs to work with locally fine-tuned smaller models, resulting in superior performance on supervised tasks. Our experiments demonstrate that SuperICL can improve performance beyond state-of-the-art fine-tuned models while addressing the instability problem of in-context learning. Furthermore, SuperICL can enhance the capabilities of smaller models, such as multilinguality and interpretability.
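As described in the abstract, SuperICL augments each in-context example (and the test input) with a locally fine-tuned small model's prediction and confidence before sending the prompt to the black-box LLM. The sketch below illustrates that prompt-construction idea; the function names, prompt wording, and toy plug-in model are illustrative assumptions, not the authors' released implementation.

```python
def build_supericl_prompt(icl_examples, test_input, plugin_predict):
    """Build a SuperICL-style prompt: every in-context example carries the
    small plug-in model's label and confidence alongside the gold label,
    and the test input carries the plug-in prediction with the final
    label left for the LLM to fill in. (Illustrative sketch only.)"""
    lines = []
    for text, gold_label in icl_examples:
        pred, conf = plugin_predict(text)
        lines.append(f"Input: {text}")
        lines.append(f"Small model prediction: {pred} (confidence: {conf:.2f})")
        lines.append(f"Label: {gold_label}")
        lines.append("")
    pred, conf = plugin_predict(test_input)
    lines.append(f"Input: {test_input}")
    lines.append(f"Small model prediction: {pred} (confidence: {conf:.2f})")
    lines.append("Label:")  # the LLM completes this line
    return "\n".join(lines)

# Toy stand-in for a fine-tuned small model (e.g. a RoBERTa
# sentiment classifier); a real plug-in would return its own
# predicted label and softmax confidence.
def toy_plugin(text):
    return ("positive", 0.97) if "great" in text else ("negative", 0.88)

prompt = build_supericl_prompt(
    [("a great movie", "positive"), ("dull and slow", "negative")],
    "great fun throughout",
    toy_plugin,
)
```

The resulting string would then be sent to the LLM, whose completion after the final "Label:" is taken as the prediction; surfacing the plug-in's confidence is what lets the LLM override low-confidence small-model predictions.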

Canwen Xu, Yichong Xu, Shuohang Wang, Yang Liu, Chenguang Zhu, Julian McAuley • 2023

Related benchmarks

Task                             | Dataset                      | Result                  | Rank
Natural Language Understanding   | GLUE (dev)                   | SST-2 (Acc): 96.79      | 504
Natural Language Inference       | XNLI (test)                  | Average Accuracy: 74.11 | 167
Topic Classification             | AG News (test)               | Accuracy: 88.79         | 98
Ontology Classification          | DBPedia (test)               | Accuracy: 97.63         | 53
Natural Language Inference       | ANLI (test)                  | Overall Score: 47.44    | 28
Intent Classification            | Banking (BANK) (test)        | Accuracy: 73.25         | 11
Medical Diagnosis Classification | Medical Abstract (MA) (test) | Accuracy: 63.75         | 11
Topic Classification             | TREC (test)                  | Accuracy: 81.6          | 11
Content Type Classification      | RCT (test)                   | Accuracy: 67.82         | 11
Intent Classification            | Banking (BANK) (train)       | Accuracy: 69.91         | 10

(Showing 10 of 15 rows)

Other info

Code
