Discovering Process-Outcome Credit in Multi-Step LLM Reasoning
About
Reinforcement Learning (RL) is a potent paradigm for enhancing reasoning capabilities in Large Language Models (LLMs), yet standard outcome-based approaches suffer from reward sparsity and inefficient credit assignment. In this paper, we propose a novel framework that provides continuous reward signals through a Step-wise Marginal Information Gain (MIG) mechanism, which quantifies the intrinsic value of each reasoning step against a Monotonic Historical Watermark and thereby filters out training noise. To disentangle credit distribution, we introduce a Decoupled Masking Strategy that applies process-oriented rewards specifically to the chain-of-thought (CoT) tokens and outcome-oriented rewards to the full completion. Additionally, we incorporate a Dual-Gated SFT objective to stabilize training with high-quality structural and factual signals. Extensive experiments across textual and multi-modal benchmarks (e.g., MATH, Super-CLEVR) demonstrate that our approach consistently outperforms baselines such as GRPO in both sample efficiency and final accuracy. Furthermore, our model exhibits superior out-of-distribution robustness, with promising zero-shot transfer to unseen and challenging reasoning tasks.
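The core MIG idea — rewarding a step only for the value it adds beyond the best value seen so far — can be sketched as a running-max filter over per-step value estimates. The function below is an illustrative assumption about one plausible instantiation, not the paper's exact formulation; the name `mig_rewards`, the initial watermark, and the value scale are all hypothetical.

```python
# Hypothetical sketch of the Step-wise Marginal Information Gain (MIG)
# mechanism: each reasoning step is credited only with its improvement
# over a Monotonic Historical Watermark (a running maximum), so steps
# that fail to beat the watermark contribute zero process reward.

def mig_rewards(step_values, watermark=0.0):
    """step_values: per-step value estimates for one chain of thought.
    Returns (per-step process rewards, updated watermark)."""
    rewards = []
    for v in step_values:
        gain = max(0.0, v - watermark)   # marginal information gain
        rewards.append(gain)
        watermark = max(watermark, v)    # watermark never decreases
    return rewards, watermark

# Example: the third step regresses (0.4 < 0.5), so it earns no reward.
rewards, wm = mig_rewards([0.2, 0.5, 0.4, 0.7])
```

Because the watermark is monotonic, noisy oscillations in the value estimates cannot repeatedly collect reward for re-reaching the same level, which matches the noise-filtering role described above.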
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination | HallusionBench | Pass@1 | 71 | 16 |
| 3D Logic | Super-CLEVR | Pass@1 | 97 | 3 |
| Charts | ChartQA | Pass@1 | 0.87 | 3 |
| Compositionality | CoGenT | Pass@1 | 91 | 3 |
| Hard Math Reasoning | AIME 2025 | Pass@1 | 13.33 | 3 |
| Logic reasoning | CommonsenseQA | Pass@1 | 69.8 | 3 |
| Math | CMM-Math | Pass@1 | 36 | 3 |
| Math Reasoning | GSM8K | Pass@1 Accuracy | 83.2 | 3 |
| Math Reasoning | Hendrycks MATH | Pass@1 | 61.4 | 3 |
| Math Robustness | SVAMP | Pass@1 | 88.67 | 3 |