Deep Reward Supervisions for Tuning Text-to-Image Diffusion Models

About

Optimizing a text-to-image diffusion model with a given reward function is an important but underexplored research area. In this study, we propose Deep Reward Tuning (DRTune), an algorithm that directly supervises the final output image of a text-to-image diffusion model and back-propagates through the iterative sampling process to the input noise. We find that training earlier steps in the sampling process is crucial for low-level rewards, and that deep supervision can be achieved efficiently and effectively by stopping the gradient of the denoising network input. DRTune is extensively evaluated on various reward models. It consistently outperforms other algorithms, particularly for low-level control signals, where all shallow supervision methods fail. Additionally, we fine-tune the Stable Diffusion XL 1.0 (SDXL 1.0) model via DRTune to optimize Human Preference Score v2.1, resulting in the Favorable Diffusion XL 1.0 (FDXL 1.0) model. FDXL 1.0 significantly improves image quality over SDXL 1.0 and reaches quality comparable to Midjourney v5.2.
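
The stop-gradient idea described above is simple to sketch. Below is a minimal, hypothetical PyTorch illustration assuming a DDIM-style deterministic sampler; `denoiser`, `reward_fn`, `alpha_bars` (cumulative alpha products), and `prompt_emb` are illustrative placeholders, not names from the authors' implementation.

```python
# Sketch of DRTune-style deep supervision via input stop-gradient.
# All names are hypothetical; this is not the authors' code.
import torch

def drtune_loss(denoiser, reward_fn, x_T, alpha_bars, prompt_emb):
    """Run the full sampling chain and return a reward loss whose
    gradient reaches every denoising step through the linear updates."""
    x = x_T
    T = len(alpha_bars) - 1
    for t in range(T, 0, -1):
        ab_t, ab_prev = alpha_bars[t], alpha_bars[t - 1]
        # Key trick: detach the *input* to the denoising network.
        # The reward gradient then flows back through the cheap linear
        # update below instead of through a T-deep chain of activations,
        # while still reaching each denoiser call's parameters via `eps`.
        eps = denoiser(x.detach(), t, prompt_emb)
        x0_pred = (x - (1.0 - ab_t).sqrt() * eps) / ab_t.sqrt()
        x = ab_prev.sqrt() * x0_pred + (1.0 - ab_prev).sqrt() * eps
    return -reward_fn(x)  # negate: minimizing the loss maximizes the reward

# Toy usage:
# loss = drtune_loss(denoiser, reward_fn, torch.randn(1, 4, 64, 64),
#                    alpha_bars, prompt_emb)
# loss.backward(); optimizer.step()
```

On this reading of the abstract, detaching the network input cuts the long backprop path through stacked activations, so the gradient from the final image reaches even the earliest sampling steps through the linear combinations alone, which is what makes deep supervision of those steps efficient.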

Xiaoshi Wu, Yiming Hao, Manyuan Zhang, Keqiang Sun, Zhaoyang Huang, Guanglu Song, Yu Liu, Hongsheng Li • 2024

Related benchmarks

Task                      | Dataset                          | Metric    | Result | Rank
Text-to-motion generation | HumanML3D (test)                 | FID       | 0.313  | 481
Text-to-Image Generation  | HPD v2 (test)                    | HPSv2     | 34.93  | 25
Text-to-Image Generation  | HPD                              | PickScore | 22.78  | 22
Text-to-Image Generation  | DrawBench                        | HPSv2.1   | 30.63  | 19
Text-to-Image Alignment   | Multi-reward evaluation scenario | HPSv2.1   | 34.16  | 8
Text-to-Image Generation  | Flux.1 1024 x 1024 (dev)         | HPSv2.1   | 29.99  | 5
