
GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning

About

Reinforcement Learning (RL) can directly enhance the reasoning capabilities of large language models without extensive reliance on Supervised Fine-Tuning (SFT). In this work, we revisit the traditional Policy Gradient (PG) mechanism and propose a minimalist RL approach termed Group Policy Gradient (GPG). Unlike conventional methods, GPG directly optimizes the original RL objective, obviating the need for surrogate loss functions. By eliminating the critic and reference models, avoiding KL divergence constraints, and addressing bias in advantage and gradient estimation, our approach significantly simplifies the training process compared to Group Relative Policy Optimization (GRPO). GPG achieves superior performance without relying on auxiliary techniques or adjustments. As illustrated in Figure 1, extensive experiments demonstrate that our method not only reduces computational costs but also consistently outperforms GRPO across various unimodal and multimodal tasks. Our code is available at https://github.com/AMAP-ML/GPG.
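The abstract describes a plain policy-gradient loss over group-normalized rewards, with no surrogate clipping, no KL penalty, and no critic. A minimal sketch of that idea, assuming per-group reward normalization as the advantage estimate (the paper's exact estimator, including its bias-correction factor, may differ):

```python
import numpy as np

def group_advantages(rewards):
    """Group-normalized advantages: (r - mean) / std over a group of
    rollouts sampled for the same prompt. A sketch of the general idea,
    not the paper's exact estimator."""
    r = np.asarray(rewards, dtype=np.float64)
    std = r.std()
    if std < 1e-8:  # all rewards equal -> no learning signal for this group
        return np.zeros_like(r)
    return (r - r.mean()) / std

def pg_loss(log_probs, rewards):
    """Vanilla policy-gradient loss on group-normalized advantages:
    L = -mean(A_i * log pi(o_i)). No clipping, no KL term, no critic --
    the simplifications the abstract attributes to GPG."""
    adv = group_advantages(rewards)
    return -np.mean(adv * np.asarray(log_probs, dtype=np.float64))
```

Minimizing this loss raises the log-probability of above-average completions in each group and lowers it for below-average ones, which is the core update that GRPO's extra machinery wraps.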

Xiangxiang Chu, Hailang Huang, Xiao Zhang, Fei Wei, Yong Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AMC | Accuracy | 65 | 151 |
| Mathematical Reasoning | Minerva | Pass@1 | 27.21 | 138 |
| Mathematical Reasoning | AIME 24 | Accuracy | 33.3 | 113 |
| Mathematical Reasoning | MATH 500 | Accuracy | 80 | 106 |
| Mathematical Reasoning | OlympiadBench | Accuracy | 0.424 | 34 |
| Mathematical Reasoning | Olympiad | Accuracy (%) | 42.4 | 21 |
| Visual Question Answering | SGG-VQA Anatomy | Avg@5 | 41.09 | 20 |
| Visual Question Answering | OmniMedVQA MI | Avg@5 | 80.89 | 18 |
| Mathematical Reasoning | Mathematical Reasoning Aggregate | Average Score | 37.93 | 18 |
| Mathematical Reasoning | Olympiad | Score | 37.67 | 17 |

Showing 10 of 17 rows.
