
DreamPRM-Code: Function-as-Step Process Reward Model with Label Correction for LLM Coding

About

Process Reward Models (PRMs) have become essential for improving Large Language Models (LLMs) via test-time scaling, yet their effectiveness in coding remains limited due to the lack of meaningful step decompositions in code and the noise of Monte-Carlo-generated partial labels. We propose DreamPRM-Code, a coding-focused PRM that treats functions as reasoning steps, using a Chain-of-Function prompting strategy to induce modular code generation and thereby enabling PRM training and application analogous to mathematical reasoning tasks. To address label noise, DreamPRM-Code introduces a meta-learning-based correction mechanism that leverages clean final-solution unit-test labels and performs bi-level optimization to refine intermediate labels. Applied to test-time scaling, DreamPRM-Code achieved state-of-the-art performance on LiveCodeBench with an 80.9 pass@1 rate, surpassing OpenAI o4-mini.
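The core idea above, treating each function in a modular solution as one reasoning step and scoring candidates with a PRM at test time, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper names (`split_into_functions`, `prm_score`, `best_of_n`) and the min-aggregation of step scores are assumptions chosen for clarity.

```python
# Hypothetical sketch of function-as-step PRM scoring for test-time scaling.
# A real PRM would be a trained model; here `step_scorer` is any callable
# that maps one function definition (a "step") to a score in [0, 1].

def split_into_functions(code: str) -> list[str]:
    """Split a modular Python solution into its top-level function
    definitions, which play the role of reasoning steps."""
    steps, current = [], []
    for line in code.splitlines():
        if line.startswith("def ") and current:
            steps.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        steps.append("\n".join(current))
    return steps

def prm_score(code: str, step_scorer) -> float:
    """Aggregate per-function step scores; taking the minimum (the weakest
    step dominates) is one common PRM aggregation choice, assumed here."""
    scores = [step_scorer(step) for step in split_into_functions(code)]
    return min(scores) if scores else 0.0

def best_of_n(candidates: list[str], step_scorer) -> str:
    """Test-time scaling: keep the candidate whose weakest function step
    scores highest under the PRM."""
    return max(candidates, key=lambda c: prm_score(c, step_scorer))
```

With a Chain-of-Function prompt, each sampled solution decomposes into such function steps, so the same best-of-N selection used for math PRMs carries over to code.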

Ruiyi Zhang, Peijia Qin, Qi Cao, Pengtao Xie • 2025

Related benchmarks

Task             Dataset               Result        Rank
Code Generation  LiveCodeBench         Pass@1: 80.9  89
Code Generation  LiveCodeBench Medium  --            23
Code Generation  LiveCodeBench Hard    Pass@1: 63.9  21
Code Generation  LiveCodeBench Easy    Pass@1: 100   6
