
RewardFlow: Generate Images by Optimizing What You Reward

About

We introduce RewardFlow, an inversion-free framework that steers pretrained diffusion and flow-matching models at inference time through multi-reward Langevin dynamics. RewardFlow unifies complementary differentiable rewards for semantic alignment, perceptual fidelity, localized grounding, object consistency, and human preference, and further introduces a differentiable VQA-based reward that provides fine-grained semantic supervision through language-vision reasoning. To coordinate these heterogeneous objectives, we design a prompt-aware adaptive policy that extracts semantic primitives from the instruction, infers edit intent, and dynamically modulates reward weights and step sizes throughout sampling. Across several image editing and compositional generation benchmarks, RewardFlow delivers state-of-the-art edit fidelity and compositional alignment.
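The multi-reward Langevin update described above can be sketched in a few lines. The following is a minimal, self-contained illustration, not the paper's implementation: the two reward functions, their fixed weights, and the finite-difference gradient are all hypothetical stand-ins (the actual rewards are differentiable neural models such as CLIP alignment and VQA scores, differentiated with autograd, and the prompt-aware policy would adapt the weights and step size during sampling).

```python
import math
import random

# Hypothetical stand-in rewards; the paper's rewards are neural networks.
def reward_align(x):     # "semantic alignment": peaks at x = 1.0
    return -sum((v - 1.0) ** 2 for v in x)

def reward_fidelity(x):  # "perceptual fidelity": peaks at x = 0.5
    return -sum((v - 0.5) ** 2 for v in x)

def grad(f, x, eps=1e-4):
    # Finite-difference gradient for illustration; autograd would supply this.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def langevin_step(x, rewards, weights, step):
    # x <- x + step * sum_i w_i * grad R_i(x) + sqrt(2 * step) * noise
    drift = [0.0] * len(x)
    for w, r in zip(weights, rewards):
        for i, gi in enumerate(grad(r, x)):
            drift[i] += w * gi
    scale = math.sqrt(2 * step)
    return [xi + step * di + scale * random.gauss(0.0, 1.0)
            for xi, di in zip(x, drift)]

random.seed(0)
x = [0.0, 0.0]
rewards = [reward_align, reward_fidelity]
for _ in range(200):
    # A prompt-aware policy would modulate weights/step each iteration;
    # they are held fixed here for simplicity.
    x = langevin_step(x, rewards, weights=[0.6, 0.4], step=0.01)
print(x)  # a noisy sample concentrated around the weighted optimum
```

With these toy quadratic rewards the combined objective is maximized at 0.8 per coordinate, so the chain samples around that point; the design point the paper makes is that several such gradients can steer a frozen sampler at inference time without any model retraining or inversion.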

Onkar Susladkar, Dong-Hwan Jang, Tushar Prakash, Adheesh Juvekar, Vedant Shah, Ayush Barik, Nabeel Bashir, Muntasir Wahed, Ritish Shrirao, Ismini Lourentzou • 2026

Related benchmarks

Task                     | Dataset              | Metric         | Result | Rank
Image Editing            | PIE-Bench            | PSNR           | 32.09  | 166
Text-to-Image Generation | T2I-CompBench (test) | Color Accuracy | 91     | 86
Text-to-Image Generation | GenEval              | Overall Score  | 91     | 16
