
Step-level Reward for Free in RL-based T2I Diffusion Model Fine-tuning

About

Recent advances in text-to-image (T2I) diffusion model fine-tuning leverage reinforcement learning (RL) to align generated images with learnable reward functions. Existing approaches reformulate denoising as a Markov decision process for RL-driven optimization. However, they suffer from reward sparsity, receiving only a single delayed reward per generated trajectory. This flaw hinders precise step-level attribution of denoising actions and undermines training efficiency. To address this, we propose a simple yet effective credit assignment framework that dynamically distributes dense rewards across denoising steps. Specifically, we track changes in cosine similarity between intermediate and final images to quantify each step's contribution to progressively reducing the distance to the final image. Our approach avoids auxiliary neural networks for step-level preference modeling and instead uses reward shaping to highlight denoising phases that have a greater impact on image quality. Our method achieves 1.25 to 2 times higher sample efficiency and better generalization across four human preference reward functions, without compromising the original optimal policy.
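The credit-assignment idea in the abstract can be sketched in a few lines: per-step rewards are the changes in cosine similarity between each intermediate image and the final image, normalized so they redistribute the single trajectory-level reward. This is a minimal illustration only; the function names and the normalization choice are assumptions, not the paper's exact formulation.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two flattened image tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def step_level_rewards(latents, final_reward):
    """Distribute a single trajectory-level reward across denoising steps.

    latents: intermediate images x_1..x_T along one denoising trajectory,
             where latents[-1] is the final image.
    final_reward: scalar score of the final image from a preference model.

    Each step's credit is its change in cosine similarity to the final
    image (how much that step moved the sample toward the result),
    normalized so the shaped rewards sum to final_reward.
    """
    final = latents[-1]
    sims = [cosine_sim(x, final) for x in latents]
    deltas = np.diff(sims)  # per-step progress toward the final image
    weights = deltas / (np.abs(deltas).sum() + 1e-8)
    return weights * final_reward
```

Because the shaped rewards sum to the original trajectory reward, the total return is preserved, which is consistent with the claim that the original optimal policy is not compromised.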

Xinyao Liao, Wei Wei, Xiaoye Qu, Yu Cheng • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Compositional Image Generation | DrawBench, task-specific (test) | GenEval | 0.96 | 4 |
| Human Preference Alignment | DrawBench, task-specific (test) | PickScore | 23.63 | 4 |
| Visual Text Rendering | DrawBench, task-specific prompts (test) | OCR Accuracy | 93 | 4 |
