
f-GRPO and Beyond: Divergence-Based Reinforcement Learning Algorithms for General LLM Alignment

About

Recent research shows that Preference Alignment (PA) objectives act as divergence estimators between aligned (chosen) and unaligned (rejected) response distributions. In this work, we extend this divergence-based perspective to general alignment settings, such as reinforcement learning with verifiable rewards (RLVR), where only environmental rewards are available. Within this unified framework, we propose f-Group Relative Policy Optimization (f-GRPO), a class of on-policy reinforcement learning algorithms, and f-Hybrid Alignment Loss (f-HAL), a class of hybrid on/off-policy objectives, for general LLM alignment based on the variational representation of f-divergences. We provide theoretical guarantees that these classes of objectives improve the average reward after alignment. Empirically, we validate our framework on both RLVR (math reasoning) and PA (safety alignment) tasks, demonstrating superior performance and flexibility compared to current methods.
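For reference, the "variational representation of f-divergences" mentioned in the abstract is presumably the standard conjugate-dual form (Nguyen, Wainwright, and Jordan): for a convex generator $f$ with $f(1) = 0$,

$$
D_f(P \,\|\, Q) \;=\; \sup_{T:\,\mathcal{X} \to \mathbb{R}} \; \mathbb{E}_{x \sim P}\!\left[T(x)\right] \;-\; \mathbb{E}_{x \sim Q}\!\left[f^{*}(T(x))\right],
$$

where $f^{*}(t) = \sup_{u} \{\, u t - f(u) \,\}$ is the convex conjugate of $f$. Choosing $f(u) = u \log u$ recovers the KL divergence. How f-GRPO and f-HAL instantiate the critic $T$ is specific to the paper and not reproduced here.

For added context, below is a minimal sketch of the group-relative advantage used by standard GRPO, the on-policy method that f-GRPO generalizes. This is only the well-known baseline computation; the function name is hypothetical, and the f-divergence-based loss that defines f-GRPO itself is not shown.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Standard GRPO group-relative advantage: normalize each sampled
    response's reward against the other samples drawn for the same prompt.
    `rewards` has shape (num_prompts, num_samples_per_prompt).
    NOTE: hypothetical sketch of the GRPO baseline only; the f-divergence
    modification introduced by f-GRPO is not reproduced here."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled responses each, with verifiable 0/1 rewards
# (the RLVR setting mentioned in the abstract).
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [1.0, 1.0, 0.0, 1.0]])
print(grpo_advantages(rewards))
```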

Rajdeep Haldar, Lantao Mei, Guang Lin, Yue Xing, Qifan Song • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Math Reasoning | LIMR | Relative Overall Score | 96.59 | 16 |
| Math Reasoning | GSM8K | Relative Overall Score | 97.22 | 16 |
| Math Reasoning | Open-RS | Relative Overall Score | 98.47 | 16 |
