
DanceGRPO: Unleashing GRPO on Visual Generation

About

Recent advances in generative AI have revolutionized visual content creation, yet aligning model outputs with human preferences remains a critical challenge. While Reinforcement Learning (RL) has emerged as a promising approach for fine-tuning generative models, existing methods like DDPO and DPOK face fundamental limitations, particularly their inability to maintain stable optimization when scaling to large and diverse prompt sets, which severely restricts their practical utility. This paper presents DanceGRPO, a framework that addresses these limitations through an innovative adaptation of Group Relative Policy Optimization (GRPO) for visual generation tasks. Our key insight is that GRPO's inherent stability mechanisms uniquely position it to overcome the optimization challenges that plague prior RL-based approaches in visual generation. DanceGRPO establishes several significant advances: First, it demonstrates consistent and stable policy optimization across multiple modern generative paradigms, including both diffusion models and rectified flows. Second, it maintains robust performance when scaling to complex, real-world scenarios encompassing three key tasks and four foundation models. Third, it shows remarkable versatility in optimizing for diverse human preferences as captured by five distinct reward models assessing image/video aesthetics, text-image alignment, video motion quality, and binary feedback. Our comprehensive experiments reveal that DanceGRPO outperforms baseline methods by up to 181% across multiple established benchmarks, including HPS-v2.1, CLIP Score, VideoAlign, and GenEval. Our results establish DanceGRPO as a robust and versatile solution for scaling Reinforcement Learning from Human Feedback (RLHF) tasks in visual generation, offering new insights into harmonizing reinforcement learning and visual synthesis.
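The stability mechanism the abstract attributes to GRPO is its group-relative advantage: instead of learning a value function, rewards for a group of samples generated from the same prompt are normalized against the group's own statistics. A minimal sketch of that normalization step is shown below; the function name and the toy reward values are illustrative, not from the paper.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Compute GRPO-style advantages for one group of samples
    drawn from the same prompt: A_i = (r_i - mean(r)) / (std(r) + eps).

    `rewards` would come from a reward model (e.g. an aesthetics or
    text-image alignment scorer); here they are placeholder floats.
    """
    r = np.asarray(rewards, dtype=np.float64)
    # Centering makes the advantages sum to zero within the group;
    # scaling by the group std keeps their magnitude bounded, which
    # is what stabilizes optimization across diverse prompts.
    return (r - r.mean()) / (r.std() + eps)

# Four samples for one prompt, scored by a (hypothetical) reward model:
adv = group_relative_advantages([0.2, 0.5, 0.8, 0.5])
```

Samples scoring above the group mean get positive advantages and are reinforced; below-mean samples are pushed down, so the update depends only on relative quality within the group, not on the reward model's absolute scale.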

Zeyue Xue, Jie Wu, Yu Gao, Fangyuan Kong, Lingting Zhu, Mengzhao Chen, Zhiheng Liu, Wei Liu, Qiushan Guo, Weilin Huang, Ping Luo• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | GenEval | Overall Score | 68 | 506 |
| Text-to-Video Generation | VBench | Quality Score | 82.83 | 155 |
| Text-to-Image Generation | GenEval 1.0 (test) | Overall Score | 41 | 85 |
| Text-to-Image Generation | HPS v2 | -- | -- | 45 |
| Text-to-Image Alignment | Pick-a-Pic v2 | Image Reward | 0.6156 | 27 |
| Text-to-Image Generation | DrawBench | Aes. | 5.821 | 25 |
| Text-to-Image Generation | Pick-a-Pic 1K prompts v1 | ImageReward | 1.135 | 20 |
| Text-to-Image Generation | HPD v1.0 (test) | HPS v3 | 0.145 | 19 |
| Text-to-Image Generation | Text-to-Image Preference Evaluation Suite (HPSv2.1, ImageReward, PickScore, Aes.Pred.v2.5, CLIP, Unified Reward) v2.1 | HPSv2.1 | 0.353 | 14 |
| Text-to-Image Generation | Out-of-Domain T2I Dataset | Laplacian Variance | 5.35e+3 | 13 |

Showing 10 of 36 rows.
