
Large Language Models Are Human-Level Prompt Engineers

About

By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However, task performance depends significantly on the quality of the prompt used to steer the model, and most effective prompts have been handcrafted by humans. Inspired by classical program synthesis and the human approach to prompt engineering, we propose Automatic Prompt Engineer (APE) for automatic instruction generation and selection. In our method, we treat the instruction as the "program," optimized by searching over a pool of instruction candidates proposed by an LLM in order to maximize a chosen score function. To evaluate the quality of the selected instruction, we evaluate the zero-shot performance of another LLM following the selected instruction. Experiments on 24 NLP tasks show that our automatically generated instructions outperform the prior LLM baseline by a large margin and achieve better or comparable performance to the instructions generated by human annotators on 19/24 tasks. We conduct extensive qualitative and quantitative analyses to explore the performance of APE. We show that APE-engineered prompts can be applied to steer models toward truthfulness and/or informativeness, as well as to improve few-shot learning performance by simply prepending them to standard in-context learning prompts. Please check out our webpage at https://sites.google.com/view/automatic-prompt-engineer.
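The search described above — propose candidate instructions with an LLM, score each by the zero-shot performance of a second LLM that follows it, and keep the best — can be sketched as a simple loop. This is a minimal illustration, not the paper's implementation: `propose_candidates` and `score` are hypothetical stand-ins for the two LLM calls, stubbed out so the loop runs on its own.

```python
# Sketch of the APE search loop: generate instruction candidates,
# score each one, select the highest-scoring instruction.
# propose_candidates and score are hypothetical stand-ins for LLM calls.

def propose_candidates(demos, n=4):
    # In APE, an LLM proposes n candidate instructions from
    # input-output demonstrations; here we fabricate placeholders.
    return [
        f"Instruction {i}: map '{demos[0][0]}' to '{demos[0][1]}'"
        for i in range(n)
    ]

def score(instruction, eval_set):
    # In APE, a second LLM follows `instruction` on held-out examples
    # and its zero-shot accuracy is the score. Stub: instruction length.
    return len(instruction)

def ape_select(demos, eval_set):
    candidates = propose_candidates(demos)
    # Keep the candidate instruction that maximizes the score function.
    return max(candidates, key=lambda ins: score(ins, eval_set))

demos = [("big", "small")]
best = ape_select(demos, eval_set=[])
print(best)
```

In the actual method the score function is the downstream task metric, and the candidate pool can be refined iteratively by asking the LLM for variations of high-scoring instructions.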

Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, Jimmy Ba • 2022

Related benchmarks

Task                               Dataset         Metric    Result   Rank
Mathematical Reasoning             GSM8K (test)    Accuracy  76.6     751
Question Answering                 OpenBookQA      Accuracy  70.7     465
Mathematical Reasoning             GSM8K           Accuracy  83.43    351
Text Classification                AG News (test)  Accuracy  82.58    210
Text Classification                SST-2 (test)    Accuracy  91.23    185
Mathematical Reasoning             GSM8K (test)    Accuracy  62.7     155
Medical Visual Question Answering  Slake           Accuracy  34.3     134
Subjectivity Classification        Subj (test)     Accuracy  73.92    125
Text Classification                TREC (test)     Accuracy  77.07    113
Text Classification                MR (test)       Accuracy  89.9     99

Showing 10 of 82 rows.
