EDGE-GRPO: Entropy-Driven GRPO with Guided Error Correction for Advantage Diversity

About

Large Language Models (LLMs) have made remarkable progress in step-by-step reasoning through reinforcement learning. However, the Group Relative Policy Optimization (GRPO) algorithm, which relies on sparse rule-based rewards, often produces identical rewards within a group, leading to the advantage collapse problem. Existing works typically address this challenge from two perspectives: enforcing model reflection to enhance response diversity, and introducing internal feedback to augment the training signal (advantage). In this work, we begin by analyzing the limitations of model reflection and investigating the policy entropy of responses at the fine-grained sample level. Based on our experimental findings, we propose the EDGE-GRPO algorithm, which adopts Entropy-Driven Advantage and Guided Error Correction to effectively mitigate the problem of advantage collapse. Extensive experiments on several mainstream reasoning benchmarks demonstrate the effectiveness and superiority of our approach. Code is available at https://github.com/ZhangXJ199/EDGE-GRPO.

Xingjian Zhang, Siwei Wen, Wenjun Wu, Lei Huang • 2025
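
To make the advantage collapse problem concrete, here is a minimal sketch of the standard group-relative advantage used by vanilla GRPO. This is not the authors' EDGE-GRPO implementation; the function name grpo_advantages and the binary 0/1 correctness reward are illustrative assumptions.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantage as in vanilla GRPO: normalize each
    response's reward by the mean and std of its sampling group.
    (Real implementations typically add a small epsilon to the std.)"""
    r = np.asarray(rewards, dtype=float)
    std = r.std()
    if std == 0.0:
        # Identical rewards within the group: every advantage is zero,
        # so the group contributes no gradient signal ("advantage collapse").
        return np.zeros_like(r)
    return (r - r.mean()) / std

# With a sparse 0/1 correctness reward, an all-wrong (or all-right)
# group is degenerate -- the failure mode EDGE-GRPO targets:
print(grpo_advantages([0, 0, 0, 0]))  # -> [0. 0. 0. 0.]      (collapsed)
print(grpo_advantages([1, 0, 1, 0]))  # -> [ 1. -1.  1. -1.]  (informative)
```

When the sparse reward makes every response in a group equally right or equally wrong, the normalized advantages are all zero and the group contributes nothing to the update; the paper's Entropy-Driven Advantage and Guided Error Correction are aimed at breaking exactly this degeneracy.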

Related benchmarks

Task                         Dataset       Result            Rank
Mathematical Reasoning       MATH 500      Accuracy: 94.2    27
General Knowledge Reasoning  MMLU-Pro      Accuracy: 73.86   27
Science Reasoning            GPQA Diamond  Accuracy: 55.56   27
Mathematical Reasoning       AIME 24/25    Accuracy: 68.87   27
