
MA-SAPO: Multi-Agent Reasoning for Score-Aware Prompt Optimization

About

Prompt optimization has become a practical way to improve the performance of Large Language Models (LLMs) without retraining. However, most existing frameworks treat evaluation as a black box, relying solely on outcome scores without explaining why prompts succeed or fail. Moreover, they involve repetitive trial-and-error refinements that remain implicit, offering limited interpretability or actionable guidance for systematic improvement. In this paper, we propose MA-SAPO, a new Multi-Agent Reasoning framework for Score-Aware Prompt Optimization that links evaluation outcomes directly to targeted refinements. Specifically, in the Training Phase, multiple agents interpret evaluation scores, diagnose weaknesses, and generate concrete revision directives, which are stored as reusable reasoning assets. In the Test Phase, an analyzer agent retrieves relevant exemplars and assets for a new prompt, and a refiner agent applies evidence-based edits to improve the prompt and its response. By grounding optimization in structured reasoning, MA-SAPO ensures edits are interpretable, auditable, and controllable. Experiments on the HelpSteer1/2 benchmarks show that our framework consistently outperforms single-pass prompting, retrieval-augmented generation, and prior multi-agent methods across multiple evaluation metrics.

Wonduk Seo, Juhyeon Lee, Junseo Koh, Wonseok Choi, Hyunjin An, Jian Park, Seunghyun Lee, Haihua Chen, Yi Bu • 2025

Related benchmarks

Task                            Dataset                       Metric            Result   Rank
Prompt Optimization Evaluation  HelpSteer1                    Helpfulness       51.83    14
Prompt Optimization Evaluation  HelpSteer2                    Helpfulness       0.5072   14
Reasoning Quality Evaluation    HelpSteer1 sampled (train)    Usefulness Score  3.89     2
