
Discovering Process-Outcome Credit in Multi-Step LLM Reasoning

About

Reinforcement Learning (RL) is a potent paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs), yet standard outcome-based approaches often suffer from reward sparsity and inefficient credit assignment. In this paper, we propose a novel framework that provides continuous reward signals through a Step-wise Marginal Information Gain (MIG) mechanism, which quantifies the intrinsic value of each reasoning step against a Monotonic Historical Watermark, effectively filtering out training noise. To keep credit distribution disentangled, we implement a Decoupled Masking Strategy, applying process-oriented rewards only to the chain-of-thought (CoT) and outcome-oriented rewards to the full completion. Additionally, we incorporate a Dual-Gated SFT objective to stabilize training with high-quality structural and factual signals. Extensive experiments across textual and multi-modal benchmarks (e.g., MATH, Super-CLEVR) show that our approach consistently outperforms baselines such as GRPO in both sample efficiency and final accuracy. Our model also exhibits superior out-of-distribution robustness, with promising zero-shot transfer to unseen and challenging reasoning tasks.
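The abstract names the mechanisms but not their exact formulas. The sketch below shows one plausible reading: per-step MIG rewards computed against a monotonically non-decreasing watermark, and decoupled credit assignment via token masks. The function names, the per-step value scores (`step_scores`), and the 0/1 masks are all assumptions for illustration, not definitions from the paper.

```python
import numpy as np

def mig_step_rewards(step_scores, watermark=0.0):
    """One reading of Step-wise Marginal Information Gain (MIG):
    a step is rewarded only for the margin by which its value estimate
    exceeds the Monotonic Historical Watermark; steps at or below the
    watermark earn zero, filtering out noisy, non-improving steps."""
    rewards = []
    for score in step_scores:
        gain = max(0.0, score - watermark)  # marginal gain over the watermark
        watermark = max(watermark, score)   # watermark never decreases
        rewards.append(gain)
    return rewards, watermark

def decoupled_token_rewards(process_rewards, outcome_reward, cot_mask):
    """One reading of the Decoupled Masking Strategy: process-oriented
    rewards are applied only to chain-of-thought tokens (cot_mask == 1),
    while the scalar outcome reward is broadcast over the full completion."""
    process_rewards = np.asarray(process_rewards, dtype=float)
    cot_mask = np.asarray(cot_mask, dtype=float)
    return cot_mask * process_rewards + outcome_reward

# Hypothetical usage:
scores = [0.2, 0.5, 0.4, 0.7]            # per-step value estimates (assumed given)
rewards, wm = mig_step_rewards(scores)    # -> [0.2, 0.3, 0.0, 0.2], wm == 0.7
token_r = decoupled_token_rewards([0.1, 0.3, 0.0, 0.0], 1.0, [1, 1, 0, 0])
# -> [1.1, 1.3, 1.0, 1.0]: CoT tokens get process + outcome credit,
#    answer tokens get outcome credit only.
```

Keeping the watermark monotone means a step is never rewarded twice for reaching the same value level, which is one simple way to make the continuous process signal consistent with the sparse outcome signal.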

Xiangwei Wang, Wei Wang, Ken Chen, Nanduni Nimalsiri, Saman Halgamuge • 2026

Related benchmarks

Task                | Dataset        | Metric          | Result | Rank
Hallucination       | HallusionBench | Pass@1          | 71     | 16
3D Logic            | Super-CLEVR    | Pass@1          | 97     | 3
Charts              | ChartQA        | Pass@1          | 0.87   | 3
Compositionality    | CoGenT         | Pass@1          | 91     | 3
Hard Math Reasoning | AIME 2025      | Pass@1          | 13.33  | 3
Logic Reasoning     | CommonsenseQA  | Pass@1          | 69.8   | 3
Math                | CMM-Math       | Pass@1          | 36     | 3
Math Reasoning      | GSM8K          | Pass@1 Accuracy | 83.2   | 3
Math Reasoning      | Hendrycks MATH | Pass@1          | 61.4   | 3
Math Robustness     | SVAMP          | Pass@1          | 88.67  | 3
(Showing 10 of 14 benchmark rows.)
