
MURPHY: Multi-Turn GRPO for Self-Correcting Code Generation

About

Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful framework for enhancing the reasoning capabilities of large language models (LLMs). However, existing approaches such as Group Relative Policy Optimization (GRPO) and its variants, while effective on reasoning benchmarks, struggle with agentic tasks that require iterative decision-making. We introduce MURPHY, a multi-turn RLVR framework that incorporates execution feedback directly into training, extending GRPO to optimize over multi-turn trajectories where models iteratively refine solutions. MURPHY combines a feedback-conditioned rollout tree with trajectory-level credit assignment, and uses pruning to reduce the cost of multi-turn optimization. Evaluations on code generation benchmarks with two model families show that MURPHY consistently improves multi-iteration performance, achieving up to an 8% absolute gain in pass@1 over compute-matched GRPO baselines, and outperforming the prior leading method that incorporates multi-turn execution feedback.
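The abstract names three ingredients: a feedback-conditioned rollout tree, trajectory-level group-relative credit assignment, and pruning to bound cost. The Python sketch below illustrates how those pieces could fit together under loudly stated assumptions: `generate`, `execute`, and the hyperparameters `group_size`, `turns`, and `keep` are hypothetical stand-ins, since this page provides only the abstract, not the paper's actual interfaces, pruning rule, or training loop.

```python
import random
import statistics

random.seed(0)

# --- Toy stand-ins (hypothetical; the abstract does not specify these APIs) ---

def generate(prompt: str, feedback: str | None) -> str:
    """Sample one candidate program from the policy.
    Toy version: a random 'quality' score that improves when the sample is
    conditioned on execution feedback from a previous attempt."""
    quality = random.random() + (0.3 if feedback else 0.0)
    return f"solution(q={quality:.2f})"

def execute(candidate: str) -> tuple[float, str]:
    """Run the candidate against unit tests; return (verifiable reward,
    execution feedback). Toy version: parse the embedded quality score."""
    q = float(candidate.split("q=")[1].rstrip(")"))
    passed = q > 0.8
    return (1.0 if passed else 0.0), ("all tests passed" if passed else "test_3 failed")

# --- Feedback-conditioned rollout tree with pruning (sketch) ------------------

def rollout_tree(prompt: str, group_size: int = 4, turns: int = 3, keep: int = 2):
    """Expand `group_size` rollouts per surviving node for `turns` turns;
    after each turn, prune to the `keep` highest-reward branches to bound
    the cost of multi-turn optimization."""
    frontier = [([], None)]          # (trajectory so far, latest feedback)
    trajectories = []
    for _ in range(turns):
        expanded = []
        for traj, feedback in frontier:
            for _ in range(group_size):
                cand = generate(prompt, feedback)
                reward, fb = execute(cand)
                expanded.append((traj + [(cand, reward)], fb))
        # Prune: keep only the branches whose latest reward is highest.
        expanded.sort(key=lambda t: t[0][-1][1], reverse=True)
        frontier = expanded[:keep]
        trajectories.extend(expanded)
    return trajectories

def group_relative_advantages(trajectories):
    """GRPO-style credit assignment lifted to the trajectory level: normalize
    each trajectory's total reward against the group mean and std."""
    totals = [sum(r for _, r in traj) for traj, _ in trajectories]
    mu = statistics.mean(totals)
    sigma = statistics.pstdev(totals) or 1.0
    return [(t - mu) / sigma for t in totals]

trajs = rollout_tree("Write a function that reverses a linked list.")
advs = group_relative_advantages(trajs)
print(f"{len(trajs)} trajectories, advantage range [{min(advs):.2f}, {max(advs):.2f}]")
```

In the paper, the policy would be an LLM and `execute` a unit-test harness whose output feeds the next turn's prompt; the pruning heuristic and per-trajectory normalization shown here are simplified illustrations of the ideas the abstract names, not MURPHY's published procedure.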

Chanakya Ekbote, Vijay Lingam, Sujay Sanghavi, Jun Huan, Behrooz Omidvar-Tehrani, Anoop Deoras, Stefano Soatto • 2025

Related benchmarks

Task              Dataset        Result          Rank
Code Generation   HumanEval      Pass@1: 95.73   850
Code Generation   MBPP           Pass@1: 73.33   18
Code Generation   BigCodeBench   Pass@1: 41.44   18
