# MASPRM: Multi-Agent System Process Reward Model

## About
Practical deployment of multi-agent systems (MAS) demands strong performance at test time, motivating methods that guide inference-time search and spend compute selectively to improve quality. We present the Multi-Agent System Process Reward Model (MASPRM), which assigns per-action, per-agent values to partial inter-agent transcripts and acts as an inference-time controller. MASPRM is trained from multi-agent Monte Carlo Tree Search (MCTS) rollouts labeled only with terminal outcome rewards; returns are propagated back to local step-level targets, so no human step-level annotations are required. During inference, MASPRM guides step-level beam search (SBS) and MCTS, focusing computation on promising branches and pruning unpromising ones. We train and evaluate MASPRM across different tasks and domains, using GSM8K, MATH, MMLU, and LogiQA as benchmarks. Averaged across these benchmarks, MASPRM improves Hit@1 over policy likelihood by up to $+13.4$ points and improves ranking quality, reducing the Hit@1→Hit@5 gap by up to $10.3$ points. MASPRM complements inference-time search by scoring intermediate routed transcripts to guide rollouts in MAS with fixed schedules. Code: https://github.com/milad1378yz/MASPRM
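To make the controller role concrete, here is a minimal sketch of step-level beam search guided by a learned value function. The `expand` and `score` callables are assumptions for illustration, not the repository's actual API: `expand` proposes candidate next agent steps for a partial transcript, and `score` stands in for a MASPRM-style value over partial inter-agent transcripts.

```python
import heapq
from typing import Callable, List, Sequence

def step_level_beam_search(
    root: str,
    expand: Callable[[str], List[str]],
    score: Callable[[str], float],
    beam_width: int = 4,
    max_depth: int = 8,
) -> str:
    """Keep the `beam_width` highest-scoring partial transcripts at each step.

    `expand` returns candidate continuations of a partial transcript (an empty
    list terminates that branch); `score` is the process reward model's value.
    """
    beam: Sequence[str] = [root]
    for _ in range(max_depth):
        candidates = [t for partial in beam for t in expand(partial)]
        if not candidates:
            break
        # Prune: retain only the branches the value model considers promising.
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return max(beam, key=score)
```

The same scorer can be plugged into MCTS as a leaf evaluator instead of rolling out to a terminal answer; the beam variant above simply trades exploration for a fixed per-step budget.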
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Language Understanding | MMLU | Accuracy 75.2 | 756 |
| Mathematical Reasoning | GSM8K | -- | 351 |
| Mathematical Reasoning | MATH | Pass@1 51.8 | 112 |
| Multi-task Language Understanding | MMLU (test) | Normalized Accuracy 75.2 | 76 |
| Mathematical Reasoning | GSM8K (test) | Hit@1 Accuracy 82.9 | 16 |
| Mathematical Reasoning | MATH (test) | Hit@1 51.8 | 16 |
| Logical Reasoning | LogiQA | Hit@1 45.5 | 7 |
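For reference, Hit@k counts a problem as solved when the correct answer appears among the top-k candidates under the ranking being evaluated, so the Hit@1→Hit@5 gap measures how much headroom a better ranker could recover. A minimal sketch (the function names are illustrative, not from the repository):

```python
from typing import List, Sequence

def hit_at_k(ranked: Sequence[str], correct: str, k: int) -> bool:
    """True if the correct answer is among the top-k ranked candidates."""
    return correct in ranked[:k]

def hit_rate(ranked_batches: List[Sequence[str]], answers: List[str], k: int) -> float:
    """Fraction of problems where Hit@k succeeds."""
    hits = sum(hit_at_k(r, a, k) for r, a in zip(ranked_batches, answers))
    return hits / len(ranked_batches)
```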