
Demystifying Prompts in Language Models via Perplexity Estimation

About

Language models can be prompted to perform a wide variety of zero- and few-shot learning problems. However, performance varies significantly with the choice of prompt, and we do not yet understand why this happens or how to pick the best prompts. In this work, we analyze the factors that contribute to this variance and establish a new empirical hypothesis: the performance of a prompt is coupled with the extent to which the model is familiar with the language it contains. Over a wide range of tasks, we show that the lower the perplexity of the prompt, the better it performs the task. As a result, we devise a method for creating prompts: (1) automatically extend a small seed set of manually written prompts by paraphrasing with GPT-3 and by backtranslation, and (2) choose the lowest-perplexity prompts to obtain significant gains in performance.
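The selection step in (2) can be sketched as follows. The paper scores prompts with the language model's own perplexity (GPT-3); to keep this sketch self-contained, it uses an add-one-smoothed unigram model over a toy corpus as a stand-in, and the corpus, vocabulary, and candidate prompts are all illustrative:

```python
import math
from collections import Counter

def unigram_perplexity(prompt, counts, total, vocab_size):
    """Perplexity under an add-one-smoothed unigram LM (stand-in for GPT-3)."""
    tokens = prompt.lower().split()
    log_prob = sum(
        math.log((counts[t] + 1) / (total + vocab_size)) for t in tokens
    )
    return math.exp(-log_prob / len(tokens))

# Toy "pretraining" corpus; in the paper, familiarity is measured with
# the language model itself, not a hand-built corpus.
corpus = "translate the sentence to french translate the text to french".split()
counts = Counter(corpus)
total = len(corpus)
vocab = len(counts)

# Hypothetical candidate prompts (e.g. produced by paraphrasing and
# backtranslating a seed prompt); pick the lowest-perplexity one.
candidates = [
    "translate the sentence to french",
    "kindly render this passage into gallic prose",
]
best = min(candidates, key=lambda p: unigram_perplexity(p, counts, total, vocab))
print(best)  # the candidate whose wording the toy "model" has seen before
```

With a real model, `unigram_perplexity` would be replaced by the exponentiated negative mean token log-likelihood the LM assigns to the prompt; the selection logic is unchanged.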

Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, Luke Zettlemoyer • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Understanding | MMLU (test) | MMLU Average Accuracy | 70.5 | 163 |
| Readmission Prediction | MIMIC-IV | AUC-ROC | 0.4882 | 70 |
| Mortality Prediction | MIMIC-III | AUROC | 75.37 | 46 |
| Readmission Prediction (RA) | MIMIC-IV (test) | ROC AUC | 0.4856 | 33 |
| Length-of-Stay Prediction | MIMIC-III | Macro ROC AUC | 63.73 | 28 |
| Data Contamination Detection | K&K | F1 Score | 67 | 16 |
| Data Contamination Detection | SAT | F1 Score | 68 | 16 |
| Data Contamination Detection | AIME 2025 | F1 Score | 62 | 16 |
| Data Contamination Detection | AIME 2024 | F1 Score | 42 | 16 |
| Mortality Prediction | MIMIC-III (test) | AUROC | 63.26 | 14 |

Showing 10 of 14 rows.
