
Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs

About

Language Model Programs, i.e., sophisticated pipelines of modular language model (LM) calls, are increasingly advancing NLP tasks, but they require crafting prompts that are jointly effective for all modules. We study prompt optimization for LM programs, i.e., how to update these prompts to maximize a downstream metric without access to module-level labels or gradients. To make this tractable, we factorize our problem into optimizing the free-form instructions and few-shot demonstrations of every module, and introduce several strategies to craft task-grounded instructions and navigate credit assignment across modules. Our strategies include (i) program- and data-aware techniques for proposing effective instructions, (ii) a stochastic mini-batch evaluation function for learning a surrogate model of our objective, and (iii) a meta-optimization procedure in which we refine how LMs construct proposals over time. Using these insights, we develop MIPRO, a novel algorithm for optimizing LM programs. MIPRO outperforms baseline optimizers on five of seven diverse multi-stage LM programs using a best-in-class open-source model (Llama-3-8B), by up to 13% accuracy. We have released our new optimizers and benchmark in DSPy at http://dspy.ai
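The core loop described in point (ii) — scoring proposed prompt configurations on random mini-batches rather than the full training set, and using the accumulated scores as a cheap surrogate of full-set quality — can be illustrated with a minimal sketch. This is a hypothetical simplification, not MIPRO itself: the names (`optimize_prompts`, `metric`) and the uniform-random proposal step are assumptions for illustration; the real algorithm uses an LM-driven proposer and a Bayesian surrogate model.

```python
import random

def optimize_prompts(candidates, trainset, metric, n_trials=50, batch_size=4, seed=0):
    """Hypothetical sketch of stochastic mini-batch prompt search.

    Each trial picks one candidate configuration, scores it on a random
    mini-batch of training examples, and records the score. The running
    average per candidate acts as a crude surrogate of its true quality,
    at a fraction of the cost of full-set evaluation.
    """
    rng = random.Random(seed)
    scores = {i: [] for i in range(len(candidates))}
    for _ in range(n_trials):
        i = rng.randrange(len(candidates))                 # propose a configuration
        batch = rng.sample(trainset, min(batch_size, len(trainset)))
        score = sum(metric(candidates[i], ex) for ex in batch) / len(batch)
        scores[i].append(score)                            # update surrogate estimate
    # Return the candidate with the best average mini-batch score.
    best = max(scores, key=lambda i: sum(scores[i]) / len(scores[i]) if scores[i] else 0.0)
    return candidates[best]
```

The design choice to evaluate on mini-batches is what makes the search tractable: a full-set evaluation of every proposal would multiply the LM-call budget by the dataset size, whereas averaged mini-batch scores converge to the same ranking over enough trials.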

Krista Opsahl-Ong, Michael J. Ryan, Josh Purtell, David Broman, Christopher Potts, Matei Zaharia, Omar Khattab • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Code Generation | HumanEval | Pass@1 | 96.95 | 1036 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 92.8 | 900 |
| Code Generation | HumanEval (test) | -- | -- | 506 |
| Code Generation | MBPP (test) | -- | -- | 298 |
| Multi-hop Question Answering | HotpotQA (test) | -- | -- | 255 |
| Code Generation | MBPP | Pass@1 | 77.92 | 193 |
| Code Generation | APPS | Pass@1 | 35.33 | 69 |
| Logical Reasoning | FOLIO (test) | Accuracy | 82.33 | 58 |
| Code Generation | CodeContests | Pass@1 | 48.28 | 38 |
| Question Answering | HotpotQA (test) | -- | -- | 37 |

Showing 10 of 51 rows.
