
Design Conditions for Intra-Group Learning of Sequence-Level Rewards: Token Gradient Cancellation

About

Under sparse terminal rewards, intra-group comparison has become the dominant paradigm for fine-tuning reasoning models with reinforcement learning. However, long-horizon training often suffers from ineffective update accumulation (a "learning tax"), solution-probability drift, and entropy collapse. This paper presents a necessary condition for algorithm design from a token-level credit-assignment perspective: to prevent reward-irrelevant drift, intra-group objectives must maintain gradient exchangeability across token updates, enabling gradient cancellation on weak-credit, high-frequency tokens. We show that two common mechanisms that disrupt exchangeability make "non-cancellation" the structural norm. Based on this, we propose minimal intra-group transformations that restore or approximate the cancellation structure in the shared token space. Experiments demonstrate that these transformations stabilize training, improve sample efficiency, and enhance final performance, validating the value of this design condition.
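A minimal NumPy sketch may help illustrate the cancellation structure the abstract refers to. This is a hypothetical illustration, not the paper's code: it assumes a GRPO-style group baseline (advantage A_i = r_i − mean(r), which sums to zero over the group) and a token whose log-probability gradient `g` is identical across all responses (a shared, weak-credit token). The rewards, response lengths, and gradient vector below are made-up values for demonstration.

```python
import numpy as np

# Group of G = 8 responses with sparse 0/1 terminal rewards (made-up values).
rewards = np.array([1., 1., 0., 0., 1., 0., 1., 0.])
adv = rewards - rewards.mean()          # group-relative advantages; sum is exactly 0

# Gradient direction of a token shared by every response in the group
# (identical context, so identical grad-log-prob); arbitrary illustrative vector.
g = np.array([0.3, -1.2, 0.7])

# Exchangeable intra-group objective: the shared token's total update is
# (sum_i A_i) * g = 0 -- reward-irrelevant tokens receive no net push.
update = sum(a * g for a in adv)
print(np.allclose(update, 0.0))         # True

# Per-sequence length normalization (one common mechanism) weights each
# response by 1/T_i, so the coefficients a_i / T_i no longer sum to zero and
# the shared token drifts even though it carries no credit.
T = np.array([60, 80, 100, 120, 70, 90, 110, 130])   # made-up response lengths
update_ln = sum((a / t) * g for a, t in zip(adv, T))
print(np.allclose(update_ln, 0.0))      # False
```

Under these assumptions, the unnormalized group objective cancels exactly on the shared token, while per-sequence length normalization leaves a nonzero residual update, which is the kind of reward-irrelevant drift the paper's design condition is meant to rule out.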

Fei Ding, Yongkang Zhang, Youwei Wang, Zijian Zeng • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | HMMT 2025 | -- | 70 |
| Code Generation | LiveCodeBench | Rate @32 Score: 75.2 | 17 |
| Mathematical Reasoning | AIME 2025 | Accuracy (avg@32): 93.2 | 10 |
