Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
About
This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x̂, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with little or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only provide a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website http://pretrain.nlpedia.ai/ including a constantly updated survey and paper list.
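The template-fill-predict pipeline described above can be sketched in a few lines. This is a hedged toy illustration, not code from the paper: `toy_lm_score` is a hypothetical stand-in for a real pre-trained language model's text probability, and the template, answer set, and cue words are invented for the example.

```python
# Minimal sketch of prompt-based prediction: (1) wrap input x in a template
# with an unfilled slot [Z], (2) score each candidate fill with a language
# model, (3) map the best-scoring answer back to an output label y.
# `toy_lm_score` is a toy stand-in for P(text) under a real pre-trained LM.

TEMPLATE = "{x} Overall, it was [Z]."                     # prompting function
ANSWERS = {"great": "positive", "terrible": "negative"}   # answer -> label map

def toy_lm_score(filled_prompt: str) -> float:
    """Stand-in LM score: rewards fills that agree with crude sentiment
    cues in the input (a real system would use LM probabilities)."""
    score = 0.0
    for cue in ("love", "wonderful", "excellent"):
        if cue in filled_prompt and "great" in filled_prompt:
            score += 1.0
    for cue in ("hate", "boring", "awful"):
        if cue in filled_prompt and "terrible" in filled_prompt:
            score += 1.0
    return score

def prompt_predict(x: str) -> str:
    # 1) prompt addition: turn input x into prompt x' with slot [Z]
    prompt = TEMPLATE.format(x=x)
    # 2) answer search: score every candidate slot filling
    scored = {z: toy_lm_score(prompt.replace("[Z]", z)) for z in ANSWERS}
    best = max(scored, key=scored.get)
    # 3) answer mapping: derive final output y from the filled string
    return ANSWERS[best]

print(prompt_predict("I love this movie."))    # -> positive
print(prompt_predict("The plot was boring."))  # -> negative
```

With a real pre-trained model, `toy_lm_score` would be replaced by the LM's probability of the filled prompt, but the three steps (prompt addition, answer search, answer mapping) are the same.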
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Detoxification | Jigsaw (test) | Perplexity (PPL) | 20.8 | 29 |
| Topic Control | AGNews (test) | Avg Topic Accuracy | 82.4 | 11 |
| Anomaly Detection | SmartHome-Bench D_Ambiguity | Accuracy | 43.73 | 11 |
| Sentiment Control | IMDB (test) | Sentiment Accuracy (Avg) | 81.6 | 11 |
| Anomaly Detection | SmartHome-Bench | Overall Accuracy | 64.99 | 11 |
| Wound Classification | Wound-dataset 1.0 | Overall Accuracy | 75.5 | 11 |