
Automatic Prompt Optimization with "Gradient Descent" and Beam Search

About

Large Language Models (LLMs) have shown impressive performance as general-purpose agents, but their abilities remain highly dependent on prompts that are hand-written with onerous trial-and-error effort. We propose a simple and nonparametric solution to this problem, Automatic Prompt Optimization (APO), which is inspired by numerical gradient descent to automatically improve prompts, assuming access to training data and an LLM API. The algorithm uses minibatches of data to form natural language "gradients" that criticize the current prompt. The gradients are then "propagated" into the prompt by editing the prompt in the opposite semantic direction of the gradient. These gradient descent steps are guided by a beam search and bandit selection procedure, which significantly improves algorithmic efficiency. Preliminary results across three benchmark NLP tasks and the novel problem of LLM jailbreak detection suggest that Automatic Prompt Optimization can outperform prior prompt editing techniques and improve an initial prompt's performance by up to 31%, by using data to rewrite vague task descriptions into more precise annotation instructions.
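The loop described above (minibatch critique → prompt edit → beam search) can be sketched in a few dozen lines. This is a minimal illustration, not the authors' implementation: the `llm` callable, the meta-prompt wording, and the helper names (`apo_step`, `apo`) are all assumptions, and the paper's bandit selection (best-arm identification over a large candidate pool) is approximated here by scoring each candidate on a small random sample.

```python
import random

def apo_step(prompt, minibatch, llm, num_edits=3):
    """One 'textual gradient descent' step: collect errors on a minibatch,
    ask the LLM for a natural-language 'gradient' (a critique), then rewrite
    the prompt in the opposite semantic direction of that critique.
    (Hypothetical meta-prompts; the paper's differ.)"""
    errors = [ex for ex in minibatch if llm(prompt, ex["x"]) != ex["y"]]
    gradient = llm(
        f"Prompt: {prompt}\nErrors: {errors}\n"
        "Explain why the prompt failed on these examples.", "")
    return [
        llm(f"Prompt: {prompt}\nCritique: {gradient}\n"
            f"Rewrite the prompt to address the critique (variant {i}).", "")
        for i in range(num_edits)
    ]

def apo(seed_prompt, data, llm, beam_width=4, depth=3, batch=8, eval_n=16):
    """Beam search over prompts. Scoring each candidate on a small random
    sample stands in for the paper's bandit-based candidate selection."""
    def score(p):
        sample = random.sample(data, min(eval_n, len(data)))
        return sum(llm(p, ex["x"]) == ex["y"] for ex in sample) / len(sample)

    beam = [seed_prompt]
    for _ in range(depth):
        candidates = list(beam)
        for p in beam:
            mb = random.sample(data, min(batch, len(data)))
            candidates += apo_step(p, mb, llm)
        # keep the beam_width highest-scoring unique prompts
        beam = sorted(set(candidates), key=score, reverse=True)[:beam_width]
    return beam[0]
```

In practice `llm` would wrap an API call; any stub with the signature `llm(prompt, x) -> str` can be plugged in to exercise the search logic offline.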

Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, Michael Zeng • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Mathematical Reasoning | GSM8K | Accuracy: 95.07 | 983 |
| Mathematical Reasoning | GSM8K (test) | Accuracy: 77.3 | 797 |
| Mathematical Reasoning | GSM8K (test) | Accuracy: 51 | 751 |
| Question Answering | OpenBookQA | Accuracy: 71.5 | 465 |
| Text Classification | AG News (test) | Accuracy: 83.73 | 210 |
| Text Classification | SST-2 (test) | Accuracy: 93.71 | 185 |
| Math Reasoning | GSM8K (test) | Accuracy: 63.1 | 155 |
| Language Understanding | MMLU (test) | MMLU Average Accuracy: 72.1 | 136 |
| Medical Visual Question Answering | Slake | Accuracy: 35.4 | 134 |
| Subjectivity Classification | Subj (test) | Accuracy: 69.8 | 125 |

Showing 10 of 67 rows.
