Small Models are Valuable Plug-ins for Large Language Models
About
Large language models (LLMs) such as GPT-3 and GPT-4 are powerful, but their weights are often publicly unavailable and their immense size makes them difficult to tune on common hardware. As a result, effectively tuning these models with large-scale supervised data can be challenging. The alternative, In-Context Learning (ICL), can use only a small number of supervised examples due to context length limits. In this paper, we propose Super In-Context Learning (SuperICL), which allows black-box LLMs to work with locally fine-tuned smaller models, resulting in superior performance on supervised tasks. Our experiments demonstrate that SuperICL can improve performance beyond state-of-the-art fine-tuned models while addressing the instability problem of in-context learning. Furthermore, SuperICL can enhance the capabilities of smaller models, such as multilinguality and interpretability.
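The idea of combining a black-box LLM with a locally fine-tuned plug-in model can be sketched as prompt construction: the small model labels each in-context example and the test input, and its prediction and confidence are inserted into the prompt so the LLM can agree with or override it. Below is a minimal, hypothetical sketch (the helper names and the toy keyword classifier standing in for a fine-tuned model are assumptions, not the paper's implementation):

```python
# Hypothetical sketch of SuperICL-style prompt construction.
# plugin_predict stands in for a locally fine-tuned small model
# (e.g. a RoBERTa sentiment classifier); here it is a toy heuristic.

def plugin_predict(sentence):
    """Return a (label, confidence) pair from the plug-in model."""
    text = sentence.lower()
    positive = text.count("great") + text.count("good")
    negative = text.count("bad") + text.count("boring")
    label = "positive" if positive >= negative else "negative"
    confidence = 0.9 if positive != negative else 0.5
    return label, confidence

def build_supericl_prompt(examples, test_input):
    """Format in-context examples, each augmented with the plug-in
    model's prediction and confidence, followed by the test input."""
    lines = []
    for sentence, gold_label in examples:
        pred, conf = plugin_predict(sentence)
        lines.append(f"Sentence: {sentence}")
        lines.append(f"Model Prediction: {pred} (confidence: {conf:.2f})")
        lines.append(f"Label: {gold_label}")
        lines.append("")
    pred, conf = plugin_predict(test_input)
    lines.append(f"Sentence: {test_input}")
    lines.append(f"Model Prediction: {pred} (confidence: {conf:.2f})")
    lines.append("Label:")  # the black-box LLM completes this line
    return "\n".join(lines)

demo_examples = [
    ("A great, uplifting film.", "positive"),
    ("Boring from start to finish.", "negative"),
]
prompt = build_supericl_prompt(demo_examples, "A good story, well told.")
print(prompt)
```

The resulting prompt is then sent to the black-box LLM, which produces the final label; because the plug-in's confidence is visible in context, the LLM can defer to the small model on confident predictions and deviate on uncertain ones.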
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Understanding | GLUE (dev), SST-2 | Accuracy | 96.79 | 504 |
| Natural Language Inference | XNLI (test) | Average Accuracy | 74.11 | 167 |
| Topic Classification | AG News (test) | Accuracy | 88.79 | 98 |
| Ontology Classification | DBPedia (test) | Accuracy | 97.63 | 53 |
| Natural Language Inference | ANLI (test) | Overall Score | 47.44 | 28 |
| Intent Classification | Banking (BANK) (test) | Accuracy | 73.25 | 11 |
| Medical Diagnosis Classification | Medical Abstract (MA) (test) | Accuracy | 63.75 | 11 |
| Topic Classification | TREC (test) | Accuracy | 81.6 | 11 |
| Content Type Classification | RCT (test) | Accuracy | 67.82 | 11 |
| Intent Classification | Banking (BANK) (train) | Accuracy | 69.91 | 10 |