# In-Context Learning Creates Task Vectors

## About
In-context learning (ICL) in Large Language Models (LLMs) has emerged as a powerful new learning paradigm. However, its underlying mechanism is still not well understood. In particular, it is challenging to map it to the "standard" machine learning framework, where one uses a training set $S$ to find a best-fitting function $f(x)$ in some hypothesis class. Here we make progress on this problem by showing that the functions learned by ICL often have a very simple structure: they correspond to the transformer LLM whose only inputs are the query $x$ and a single "task vector" calculated from the training set. Thus, ICL can be seen as compressing $S$ into a single task vector $\boldsymbol{\theta}(S)$ and then using this task vector to modulate the transformer to produce the output. We support the above claim via comprehensive experiments across a range of models and tasks.
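The two-stage view described above — compress the demonstrations $S$ into a single vector $\boldsymbol{\theta}(S)$, then apply the model to the query $x$ modulated by that vector — can be illustrated with a deliberately simplified numpy sketch. This is a toy stand-in, not the paper's method (which patches hidden states inside a real transformer LLM); all names here (`task_vector`, `apply`, `W_out`) and the linear "task" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy embedding dimension

# Toy stand-in for the model's readout. In the paper the "model" is a
# transformer LLM; here a fixed linear map keeps the example self-contained.
W_out = rng.standard_normal((D, D))

def task_vector(demos):
    """Compress the demonstration set S into a single vector theta(S).

    Toy rule: average the (output - input) embedding differences. The
    point is only that S collapses to one vector, as in the paper's claim.
    """
    return np.mean([y - x for x, y in demos], axis=0)

def apply(theta, x):
    """f(x; theta): the query x modulated by the task vector theta(S)."""
    return W_out @ (x + theta)

# A linear "task": map x to x + b for a fixed, unknown offset b.
b = rng.standard_normal(D)
demos = [(x, x + b) for x in rng.standard_normal((5, D))]

theta = task_vector(demos)       # here theta(S) recovers b (up to rounding)
x_query = rng.standard_normal(D)
pred = apply(theta, x_query)     # prediction uses only x_query and theta

assert np.allclose(theta, b)
assert np.allclose(pred, W_out @ (x_query + b))
```

The key structural point the sketch mirrors is that, at query time, the demonstrations are no longer needed: `apply` sees only the query and the single vector $\boldsymbol{\theta}(S)$.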
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text Classification | AG-News | Accuracy | 57.9 | 248 |
| Topic Classification | AG-News | Accuracy | 58.9 | 173 |
| Commonsense Question Answering | CommonsenseQA | Accuracy | 22 | 81 |
| Semantic Antonym Prediction | Antonym | Accuracy | 65.7 | 44 |
| Machine Translation | English-French | Accuracy | 73.8 | 42 |
| Sentiment Classification | Sentiment classification | Accuracy | 77.1 | 32 |
| Knowledge Retrieval / Relation Prediction | Person-Instrument | Accuracy | 0.706 | 30 |
| Named Entity Recognition | NER person | Accuracy | 0.626 | 26 |
| Named Entity Recognition | NER location | Accuracy | 41.9 | 26 |
| Named Entity Recognition | NER organization | Accuracy | 51.1 | 26 |