Reinforcing Diffusion Models by Direct Group Preference Optimization

About

While reinforcement learning methods such as Group Relative Policy Optimization (GRPO) have significantly enhanced Large Language Models, adapting them to diffusion models remains challenging. In particular, GRPO demands a stochastic policy, yet the most cost-effective diffusion samplers are based on deterministic ODEs. Recent work addresses this issue by employing inefficient SDE-based samplers to induce stochasticity, but this reliance on model-agnostic Gaussian noise leads to slow convergence. To resolve this conflict, we propose Direct Group Preference Optimization (DGPO), a new online RL algorithm that dispenses with the policy-gradient framework entirely. DGPO learns directly from group-level preferences, which exploit the relative information of samples within each group. This design eliminates the need for inefficient stochastic policies, unlocking the use of efficient deterministic ODE samplers and enabling faster training. Extensive results show that DGPO trains around 20 times faster than existing state-of-the-art methods and achieves superior performance on both in-domain and out-of-domain reward metrics. Code is available at https://github.com/Luo-Yihong/DGPO.
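To make the group-level idea concrete, here is a minimal sketch of what one training step could look like. It only illustrates the high-level recipe the abstract describes: sample a group with a deterministic ODE sampler, score it with a reward model, and weight a per-sample loss by group-relative preference. The callables `model`, `ode_sampler`, and `reward_fn`, the softmax preference weighting, the `beta` temperature, and the re-noising regression loss are all assumptions for illustration, not the paper's actual objective; see the official repo for the real algorithm.

import torch
import torch.nn.functional as F

def dgpo_style_step(model, ode_sampler, reward_fn, optimizer,
                    group_size=8, beta=1.0):
    # 1) Sample a group with a *deterministic* ODE sampler; per the
    #    paper's motivation, no SDE noise injection is required.
    with torch.no_grad():
        images = ode_sampler(model, num_samples=group_size)  # (G, C, H, W)
        rewards = reward_fn(images)                          # (G,)

    # 2) Group-relative preferences: each sample is compared only
    #    against the others in its own group, so no learned value
    #    baseline is needed (hypothetical softmax weighting).
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    weights = torch.softmax(beta * adv, dim=0)               # sums to 1

    # 3) Regress the denoiser toward the group, weighting samples by
    #    preference so higher-reward outputs pull harder (assumed
    #    stand-in loss; a simple linear noising schedule is used).
    t = torch.rand(group_size, device=images.device)
    noise = torch.randn_like(images)
    a = t.view(-1, 1, 1, 1)
    x_t = (1 - a) * images + a * noise
    pred = model(x_t, t)
    per_sample = F.mse_loss(pred, noise, reduction="none").flatten(1).mean(1)
    loss = (weights * per_sample).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), rewards

Because every quantity above is computed from a plain forward pass and a deterministic sampler, no stochastic policy or per-step log-probabilities are needed, which is the efficiency argument the abstract makes against SDE-based GRPO adaptations.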

Yihong Luo, Tianyang Hu, Jing Tang · 2025

Related benchmarks

Task                            Dataset                Metric           Result   Rank
Compositional Image Generation  GenEval                Overall Score    0.97     44
Image Generation                DrawBench              Aesthetic Score  5.37     10
Visual Text Rendering           Visual Text Rendering  OCR Accuracy     96       8

Other info

GitHub: https://github.com/Luo-Yihong/DGPO
