
Data-regularized Reinforcement Learning for Diffusion Models at Scale

About

Aligning generative diffusion models with human preferences via reinforcement learning (RL) is critical yet challenging. Existing algorithms are often vulnerable to reward hacking, which manifests as quality degradation, over-stylization, or reduced diversity. Our analysis attributes this to inherent limitations of their regularization, which yields unreliable penalties. We introduce Data-regularized Diffusion Reinforcement Learning (DDRL), a novel framework that uses the forward KL divergence to anchor the policy to an off-policy data distribution. Theoretically, DDRL enables robust, unbiased integration of RL with standard diffusion training. Empirically, this translates into a simple yet effective algorithm that combines reward maximization with diffusion loss minimization. With over a million GPU hours of experiments and ten thousand double-blind human evaluations, we demonstrate on high-resolution video generation tasks that DDRL significantly improves rewards while alleviating the reward hacking seen in baselines, achieving the highest human preference and establishing a robust and scalable paradigm for diffusion post-training.
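
The abstract's central idea is that anchoring the policy with the forward KL divergence, KL(p_data ‖ p_θ), amounts to maximum-likelihood training on off-policy data, which the standard diffusion (denoising) loss upper-bounds; the resulting algorithm is therefore reward maximization plus an ordinary diffusion loss. The sketch below illustrates what such a two-term objective could look like in PyTorch. It is a minimal illustration, not the authors' implementation: `ddrl_loss`, `policy`, `reward_model`, `lambda_reg`, and the variance-preserving noise schedule are all assumptions made here, and a real RL setup would additionally need a policy-gradient or differentiable-sampling estimator for the reward term.

```python
import torch

def ddrl_loss(policy, reward_model, data_batch, lambda_reg=1.0):
    """Hypothetical DDRL-style objective: maximize reward on samples from
    the current policy while minimizing a standard denoising loss on an
    off-policy data batch (the forward-KL data anchor)."""
    # Reward term: score samples drawn from the current policy.
    # (In practice this gradient would come from an RL estimator.)
    samples = policy.sample(batch_size=data_batch.shape[0])
    reward_term = reward_model(samples).mean()

    # Data-regularization term: ordinary diffusion loss on real data,
    # i.e. predict the noise added at a random timestep t in [0, 1).
    t = torch.rand(data_batch.shape[0], device=data_batch.device)
    t_exp = t.view(-1, *([1] * (data_batch.dim() - 1)))  # broadcast over dims
    noise = torch.randn_like(data_batch)
    noisy = (1.0 - t_exp).sqrt() * data_batch + t_exp.sqrt() * noise
    pred_noise = policy.denoise(noisy, t)
    diffusion_term = torch.mean((pred_noise - noise) ** 2)

    # Descending this loss jointly maximizes reward and minimizes the
    # diffusion loss; lambda_reg trades the two terms off.
    return -reward_term + lambda_reg * diffusion_term
```

Because the regularizer is computed on real data rather than on the model's own samples, its penalty stays well-defined even as the policy drifts, which is the reliability property the abstract contrasts with prior regularizers.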

Haotian Ye, Kaiwen Zheng, Jiashu Xu, Puheng Li, Huayu Chen, Jiaqi Han, Sheng Liu, Qinsheng Zhang, Hanzi Mao, Zekun Hao, Prithvijit Chattopadhyay, Dinghao Yang, Liang Feng, Maosheng Liao, Junjie Bai, Ming-Yu Liu, James Zou, Stefano Ermon · 2025

Related benchmarks

Task           | Dataset                      | Metric       | Result  | Rank
Image-to-Video | PAI-Bench VideoAlign (test)  | Δ-Vote (%)   | 0.00e+0 | 8
Image-to-Video | PAI-Bench VBench (test)      | Δ-Vote (%)   | 0.00e+0 | 8
Text-to-Video  | PAI-Bench VideoAlign (test)  | Δ-Vote (%)   | 0.00e+0 | 8
Text-to-Video  | PAI-Bench VBench (test)      | Δ-Vote (%)   | 0.00e+0 | 8
Text-to-Image  | SD3.5-Medium synthetic data  | OCR Accuracy | 82.3    | 4
