
Select Smarter, Not More: Prompt-Aware Evaluation Scheduling with Submodular Guarantees

About

Automatic prompt optimization (APO) hinges on the quality of its evaluation signal, yet scoring every prompt candidate on the full training set is prohibitively expensive. Existing methods either fix a single evaluation subset before optimization begins (principled but prompt-agnostic) or adapt it heuristically during optimization (flexible but unstable and lacking formal guarantees). We observe that APO naturally maps to an online adaptive testing problem: prompts are examinees, training examples are test items, and the scheduler should select items that best discriminate among the strongest candidates. This insight motivates Prompt-Aware Online Evaluation Scheduling (POES), which integrates an IRT-based discrimination utility, a facility-location coverage term, and switching-cost-aware warm-start swaps into a unified objective that is provably monotone submodular, yielding a (1-1/e) greedy guarantee for cold starts and bounded drift for warm-start updates. An adaptive controller modulates the exploration-exploitation balance based on optimization progress. Across 36 tasks spanning three benchmark families, POES achieves the highest overall average accuracy (6.2 percent improvement over the best baseline) with negligible token overhead (approximately 4 percent) at the same evaluation budget. Moreover, principled selection at k = 20 examples matches or exceeds the performance of naive evaluation at k = 30-50, reducing token consumption by 35-60 percent, showing that selecting smarter is more effective than selecting more. Our results demonstrate that evaluation scheduling is a first-class component of APO, not an implementation detail.
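The abstract's core machinery is greedy maximization of a monotone submodular objective, which carries the classical (1 - 1/e) approximation guarantee. The sketch below illustrates that idea on the facility-location coverage term alone; it is a minimal illustration, not the paper's POES implementation, and the similarity matrix, helper names, and toy data are assumptions for the example. The IRT discrimination utility and switching-cost terms of the full objective are omitted.

```python
import numpy as np

def facility_location_gain(sim, selected, candidate):
    """Marginal gain of adding `candidate` under the facility-location
    objective f(S) = sum_i max_{j in S} sim[i, j] (monotone submodular)."""
    if not selected:
        return sim[:, candidate].sum()
    current = sim[:, selected].max(axis=1)
    return np.maximum(current, sim[:, candidate]).sum() - current.sum()

def greedy_select(sim, k):
    """Greedy maximization; for a monotone submodular f this achieves
    at least (1 - 1/e) of the optimal value for the budget k."""
    selected = []
    remaining = set(range(sim.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda j: facility_location_gain(sim, selected, j))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy example: 6 training examples, cosine similarity from random embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 4))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
sim = emb @ emb.T
subset = greedy_select(sim, k=3)
print(subset)
```

In the paper's framing, `sim` would be defined over training examples and the greedy loop would score the unified objective (discrimination plus coverage minus switching cost) rather than coverage alone; warm starts would then perform bounded-drift swaps instead of rebuilding the set from scratch.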

Xiaoyu Ma, Yiwen Li, Haoyue Liu, Zhichao Wang, Ye Chen, Yongxin Guo, Xiaoying Tang • 2026

Related benchmarks

Task                 Dataset         Metric               Result   Rank
Reasoning            BBH             Accuracy             89.3     672
Math                 GSM8K           Accuracy             0.972    206
Mathematics          MATH            Accuracy             92.7     85
Math Reasoning       GSM-Hard        Accuracy             82.2     67
Math Reasoning       MultiArith      Accuracy             98.3     65
Knowledge Reasoning  MMLU            Accuracy             77.9     65
General Reasoning    BIG-bench       Accuracy (General)   81.6     36
Performance Ranking  36 main tasks   Rank-1 Count         14       6
