
Reward Sharpness-Aware Fine-Tuning for Diffusion Models

About

Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models with human preferences, inspiring the development of reward-centric diffusion reinforcement learning (RDRL) to achieve similar alignment and controllability. While diffusion models can generate high-quality outputs, RDRL remains susceptible to reward hacking, where the reward score increases without corresponding improvements in perceptual quality. We demonstrate that this vulnerability arises from the non-robustness of reward model gradients, particularly when the reward landscape with respect to the input image is sharp. To mitigate this issue, we introduce methods that exploit gradients from a robustified reward model without retraining it. Specifically, we employ gradients from a flattened reward model, obtained through parameter perturbations of the diffusion model and perturbations of its generated samples. Empirically, each method independently alleviates reward hacking and improves robustness, while their joint use amplifies these benefits. Our resulting framework, RSA-FT (Reward Sharpness-Aware Fine-Tuning), is simple, broadly compatible, and consistently enhances the reliability of RDRL.
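The abstract describes two smoothing mechanisms: a perturbation of the diffusion model's parameters (in the spirit of sharpness-aware minimization, SAM) and a Gaussian perturbation of the generated samples before they are scored by the reward model. The sketch below illustrates how the two could be combined in a single reward fine-tuning step. It is a minimal illustration under stated assumptions, not the authors' implementation: the names `diffusion_model.sample` (assumed to be a differentiable sampler), `reward_model`, the SAM radius `rho`, and the noise scale `sigma` are all hypothetical.

```python
import torch

def rsa_ft_step(diffusion_model, reward_model, prompts, optimizer,
                rho=0.05, sigma=0.01):
    """One illustrative fine-tuning step combining two smoothing mechanisms:
    (1) a SAM-style perturbation of the diffusion model's parameters, and
    (2) Gaussian noise on the generated samples before reward scoring.
    Hypothetical helper names; a sketch, not the paper's code.
    """
    # --- gradient at the current parameters (negative reward as loss) ---
    images = diffusion_model.sample(prompts)   # assumed differentiable sampler
    loss = -reward_model(images, prompts).mean()
    loss.backward()

    # --- (1) ascend to a nearby point within an L2 ball of radius rho ---
    grads = [p.grad for p in diffusion_model.parameters() if p.grad is not None]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    eps = {}
    with torch.no_grad():
        for p in diffusion_model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                          # theta -> theta + eps
            eps[p] = e
    optimizer.zero_grad()

    # --- (2) score noise-perturbed samples at the perturbed parameters ---
    images = diffusion_model.sample(prompts)
    noisy = images + sigma * torch.randn_like(images)  # input-space smoothing
    loss = -reward_model(noisy, prompts).mean()
    loss.backward()                            # gradient of the flattened reward

    # restore the original parameters, then update with the robust gradient
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```

Restoring the parameters before `optimizer.step()` mirrors standard SAM: the update is applied to the original weights using the gradient evaluated at the perturbed point, which discourages convergence to sharp reward maxima.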

Kwanyoung Kim, Byeongsu Sim • 2026

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | HPD v2 (test) | HPSv2 | 35.81 | 25 |
| Text-to-Image Generation | HPD | PickScore | 23.08 | 22 |
| Text-to-Image Generation | DrawBench | HPSv2.1 | 31.67 | 19 |
| Text-to-Image Alignment | Multi-reward evaluation scenario | HPSv2.1 | 34.98 | 8 |
| Text-to-Image Generation | Flux.1 1024 x 1024 (dev) | HPSv2.1 | 30.32 | 5 |
