
Group Causal Policy Optimization for Post-Training Large Language Models

About

Recent advances in large language models (LLMs) have broadened their applicability across diverse tasks, yet specialized domains still require targeted post-training. Among existing methods, Group Relative Policy Optimization (GRPO) stands out for its efficiency, leveraging group-wise relative rewards while avoiding costly value-function learning. However, GRPO treats candidate responses as independent, overlooking semantic interactions such as complementarity and contradiction. To address this challenge, we first introduce a Structural Causal Model (SCM) that reveals hidden dependencies among candidate responses: conditioning on a final integrated output forms a collider structure. This causal analysis yields two insights: (1) projecting responses onto a causally informed subspace improves prediction quality, and (2) this projection provides a better baseline than conditioning on the query alone. Building on these insights, we propose Group Causal Policy Optimization (GCPO), which integrates causal structure into optimization through two key components: a causally informed reward adjustment and a novel KL-regularization term that aligns the policy with a causally projected reference distribution. Comprehensive experimental evaluations demonstrate that GCPO consistently surpasses existing methods, including GRPO, across multiple reasoning benchmarks.
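To make the two components concrete, here is a minimal sketch in NumPy. The group-relative advantage follows the standard GRPO recipe (normalize each reward against its group's mean and standard deviation). The causal adjustment is purely illustrative: the paper does not spell out its projection operator here, so `causally_projected_rewards` uses a hypothetical similarity-weighted mixing over candidate-response embeddings as a stand-in for the causally informed subspace projection.

```python
import numpy as np

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each candidate's reward against
    the group mean and std, avoiding a learned value function."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def causally_projected_rewards(rewards, embeddings):
    """Hypothetical GCPO-style reward adjustment (illustrative only):
    mix each candidate's reward with those of semantically related
    candidates, using a row-stochastic similarity matrix as a stand-in
    for the paper's causally informed projection."""
    E = np.asarray(embeddings, dtype=float)
    S = E @ E.T                                   # pairwise similarities
    W = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)  # softmax rows
    return W @ np.asarray(rewards, dtype=float)
```

In this sketch, responses that complement each other (high similarity) pull their rewards toward one another, while an isolated contradictory response keeps a reward closer to its own; the actual GCPO operator and KL term are defined in the paper itself.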

Ziyin Gu, Jingyao Wang, Ran Zuo, Chuxiong Sun, Zeen Song, Changwen Zheng, Wenwen Qiang • 2025

Related benchmarks

Task                   | Dataset                   | Metric | Result | Rank
Mathematical Reasoning | GSM8K                     | Pass@1 | 92.6   | 102
Mathematical Reasoning | AIME 2025                 | Pass@1 | 53     | 96
Mathematical Reasoning | AIME 2024                 | Pass@1 | 58.3   | 86
Mathematical Reasoning | Minerva Math              | Pass@1 | 45     | 82
Mathematical Reasoning | Math Benchmarks Aggregate | Pass@1 | 70.9   | 44
Mathematical Reasoning | AMC 2023                  | Pass@1 | 87.3   | 30
