
Towards Generalizable Reasoning: Group Causal Counterfactual Policy Optimization for LLM Reasoning

About

Large language models (LLMs) excel at complex tasks thanks to advances in reasoning capabilities. However, existing reward mechanisms remain tightly coupled to final correctness and pay little attention to the underlying reasoning process: trajectories with sound reasoning but wrong answers receive low credit, while lucky guesses built on flawed logic may be highly rewarded, which harms reasoning generalization. From a causal perspective, we interpret multi-candidate reasoning for a fixed question as a family of counterfactual experiments, with theoretical support. Building on this, we propose Group Causal Counterfactual Policy Optimization to explicitly train LLMs to learn generalizable reasoning patterns. The method introduces an episodic causal counterfactual reward that jointly captures (i) robustness, encouraging the answer distribution induced by a reasoning step to remain stable under counterfactual perturbations; and (ii) effectiveness, enforcing sufficient variability so that the learned reasoning strategy can transfer across questions. We then construct token-level advantages from this reward and optimize the policy, encouraging LLMs to favor reasoning patterns that are process-valid and counterfactually robust. Extensive experiments on diverse benchmarks demonstrate its advantages.
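The abstract only describes the episodic reward at a high level. As a purely illustrative sketch, one way to combine a robustness term (stability of the answer distribution across counterfactual rollouts of the same question) with an effectiveness term (non-degenerate variability) might look like the following. All function names, the total-variation/entropy choices, and the trade-off weight `lam` are assumptions for illustration, not the authors' implementation.

```python
from collections import Counter
import math

def answer_distribution(answers):
    """Empirical distribution over final answers in a group of rollouts."""
    counts = Counter(answers)
    n = len(answers)
    return {a: c / n for a, c in counts.items()}

def robustness(base_answers, perturbed_answers):
    """Stability of the answer distribution under counterfactual
    perturbations: 1 minus the total variation distance between the
    base and perturbed empirical distributions (1.0 = identical)."""
    p = answer_distribution(base_answers)
    q = answer_distribution(perturbed_answers)
    support = set(p) | set(q)
    tv = 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)
    return 1.0 - tv

def effectiveness(answers):
    """Normalized entropy of the answer distribution, as a simple
    proxy for 'sufficient variability' (0.0 = fully degenerate)."""
    p = answer_distribution(answers)
    if len(p) <= 1:
        return 0.0
    h = -sum(pi * math.log(pi) for pi in p.values())
    return h / math.log(len(p))

def counterfactual_reward(base_answers, perturbed_answers, lam=0.5):
    """Hypothetical episodic reward: a weighted mix of robustness and
    effectiveness; lam is an assumed trade-off weight in [0, 1]."""
    return (lam * robustness(base_answers, perturbed_answers)
            + (1.0 - lam) * effectiveness(base_answers))
```

For example, a group whose answers are unchanged under perturbation scores full robustness, while a group that always emits the same answer scores zero effectiveness; the episodic reward trades the two off and would then be spread into token-level advantages during policy optimization.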

Jingyao Wang, Peizheng Guo, Wenwen Qiang, Jiahuan Zhou, Huijie Guo, Changwen Zheng, Hui Xiong • 2026

Related benchmarks

Task                    Dataset                     Metric   Result  Rank
Mathematical Reasoning  GSM8K                       Pass@1   93.6    102
Mathematical Reasoning  AIME 2025                   Pass@1   54.3    96
Mathematical Reasoning  AIME 2024                   Pass@1   59.1    86
Mathematical Reasoning  Minerva Math                Pass@1   45.6    82
Mathematical Reasoning  Math Benchmarks Aggregate   Pass@1   71.8    44
Mathematical Reasoning  AMC 2023                    Pass@1   88.2    30
