Skip-Connected Policy Optimization for Implicit Advantage

About

Group Relative Policy Optimization (GRPO) has proven effective in RLVR by using outcome-based rewards. While fine-grained dense rewards can theoretically improve performance, we reveal that under practical sampling budgets, Monte Carlo estimation yields high-variance and sign-inconsistent advantages for early reasoning tokens, paradoxically underperforming outcome-only GRPO. We propose Skip-Connected Policy Optimization (SKPO), which decomposes reasoning into upstream and downstream phases: the upstream phase receives dense rewards from downstream Monte Carlo sampling with single-stream optimization; the downstream phase retains group-relative optimization, where a skip connection concatenates the upstream segment with the original problem, enabling the model to leverage helpful upstream reasoning while preserving the freedom to bypass flawed reasoning through direct access to the problem. Experiments demonstrate relative gains of 3.91% and 6.17% over the strongest baselines on Qwen2.5-Math-7B and Llama-3.2-3B respectively, across mathematical benchmarks and out-of-domain tasks including general reasoning and code generation. Further analysis reveals an implicit advantage: SKPO generates trajectories with higher intermediate-step quality even when matched for final correctness.
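The two ingredients the abstract names can be sketched in a few lines: the group-relative advantage normalization that GRPO applies to outcome rewards, and a skip-connected prompt that pairs the upstream reasoning segment with the original problem. This is a minimal illustrative sketch, not the authors' implementation; the function names and the exact prompt layout are assumptions.

```python
def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize outcome rewards within a sampled
    group by subtracting the group mean and dividing by the group std."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5 or 1.0  # avoid /0
    return [(r - mean) / std for r in rewards]

def skip_connected_prompt(problem, upstream_segment):
    """Hypothetical skip connection: the downstream context contains the
    upstream reasoning AND a restatement of the original problem, so the
    policy can build on the segment or bypass it (layout is an assumption)."""
    return (f"Problem: {problem}\n\n"
            f"Partial reasoning:\n{upstream_segment}\n\n"
            f"Problem (restated): {problem}")
```

With a group of binary outcome rewards such as `[1, 1, 0, 0]`, the normalized advantages come out as `[1.0, 1.0, -1.0, -1.0]`; a zero-variance group maps to all-zero advantages rather than dividing by zero.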

Fengwei Teng, Jinyi Bai, Xinhao Yao, Demi Ruohan Wang, Jiahao Zhao, Zhijiang Guo • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | MATH | Accuracy | 80.8 | 882
General Knowledge | MMLU | Accuracy | 30.1 | 234
Mathematical Reasoning | AMC | Accuracy | 71.4 | 203
Mathematical Reasoning | Minerva Math | Accuracy | 34.7 | 186
Mathematical Reasoning | AIME 2024 | Accuracy | 35.7 | 151
Code Generation | LiveCodeBench | Accuracy | 19.6 | 60
Mathematical Reasoning | AIME 2025 | Accuracy | 18.7 | 40
