
Long Chain-of-Thought Compression via Fine-Grained Group Policy Optimization

About

Large Language Models (LLMs) often generate unnecessarily verbose Chain-of-Thought (CoT) reasoning that increases computational cost and latency without proportional performance gains. In this paper, we propose Fine-grained Group policy Optimization (FGO), a Reinforcement Learning (RL) algorithm that refines group responses by subdividing them and assigning appropriate weights based on length and entropy, thereby enabling effective CoT compression. As an enhanced variant of Group Relative Policy Optimization (GRPO), FGO also addresses two major limitations of GRPO: inefficient data utilization and entropy collapse. We evaluate FGO on multiple reasoning LLMs and benchmarks, including MATH500, AIME24, AMC23, and Minerva. Experimental results show that FGO achieves efficient CoT compression without degrading performance while resolving these key limitations of GRPO.
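The abstract describes FGO as weighting group responses by length and entropy on top of GRPO's group-relative advantages. The paper's exact formulation is not given here, so the sketch below is purely illustrative: `fgo_weights`, `alpha`, and `beta` are hypothetical names and knobs, showing one plausible way shorter, higher-entropy responses could be up-weighted to encourage compression and counteract entropy collapse.

```python
import math

def grpo_advantages(rewards):
    """Standard GRPO step: normalize rewards within a group of sampled responses."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) or 1.0  # guard against a zero-variance group
    return [(r - mean) / (std + 1e-8) for r in rewards]

def fgo_weights(lengths, entropies, alpha=0.5, beta=0.5):
    """Hypothetical fine-grained weights (NOT the paper's formula):
    favor shorter responses (CoT compression) and higher mean token
    entropy (pushing back on entropy collapse)."""
    max_len = max(lengths)
    return [alpha * (1.0 - L / max_len) + beta * H
            for L, H in zip(lengths, entropies)]

# Example group of 4 sampled responses for one prompt
rewards = [1.0, 1.0, 0.0, 0.0]     # binary correctness reward
lengths = [120, 480, 300, 90]      # response lengths in tokens
entropies = [0.8, 0.3, 0.5, 0.9]   # mean per-token entropy

adv = grpo_advantages(rewards)
weights = fgo_weights(lengths, entropies)
weighted_adv = [a * w for a, w in zip(adv, weights)]
```

Under this toy weighting, the short correct response (120 tokens) ends up with a larger positive advantage than the long correct one (480 tokens), which is the qualitative behavior a compression-oriented objective would need.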

Xinchen Han, Hossam Afifi, Michel Marot, Xilu Wang, Lu Yin • 2026

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | MATH500 (test) | Accuracy: 73.2 | 381
Mathematical Reasoning | AMC23 (test) | Pass@1: 55 | 36
Mathematical Reasoning | Minerva (test) | Accuracy: 24.6 | 12
