
PromptLoop: Plug-and-Play Prompt Refinement via Latent Feedback for Diffusion Model Alignment

About

Despite recent progress, reinforcement learning (RL)-based fine-tuning of diffusion models often struggles with generalization, composability, and robustness against reward hacking. Recent studies have explored prompt refinement as a modular alternative, but most adopt a feed-forward approach that applies a single refined prompt throughout the entire sampling trajectory, thereby failing to fully leverage the sequential nature of reinforcement learning. To address this, we introduce PromptLoop, a plug-and-play RL framework that incorporates latent feedback into step-wise prompt refinement. Rather than modifying diffusion model weights, a multimodal large language model (MLLM) is trained with RL to iteratively update prompts based on intermediate latent states of diffusion models. This design achieves a structural analogy to the Diffusion RL approach, while retaining the flexibility and generality of prompt-based alignment. Extensive experiments across diverse reward functions and diffusion backbones demonstrate that PromptLoop (i) achieves effective reward optimization, (ii) generalizes seamlessly to unseen models, (iii) composes orthogonally with existing alignment methods, and (iv) mitigates over-optimization and reward hacking while introducing only a practically negligible inference overhead.
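The loop described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `denoise_step` and `refine_prompt` are hypothetical stand-ins for one diffusion sampling step and for the RL-trained MLLM policy that rewrites the prompt from the current latent.

```python
# Toy sketch of a PromptLoop-style sampling loop with step-wise prompt
# refinement from latent feedback. All names are illustrative assumptions;
# a real system would call a diffusion model and an MLLM here.

def denoise_step(latent, prompt, t):
    # Stand-in for one diffusion denoising step: nudge the scalar "latent"
    # toward a prompt-dependent target (pretend prompt conditioning).
    target = float(len(prompt) % 7)
    return latent + 0.5 * (target - latent)

def refine_prompt(prompt, latent, t):
    # Stand-in for the MLLM policy: update the prompt using the
    # intermediate latent state (the "latent feedback" signal).
    return f"{prompt} [t={t}, latent~{latent:.2f}]"

def promptloop_sample(initial_prompt, num_steps=4):
    latent, prompt = 0.0, initial_prompt
    trajectory = []
    for t in range(num_steps):
        prompt = refine_prompt(prompt, latent, t)  # step-wise refinement
        latent = denoise_step(latent, prompt, t)   # standard sampling step
        trajectory.append((t, prompt, latent))
    return trajectory

traj = promptloop_sample("a photo of a corgi")
```

The key structural point, per the abstract, is that the prompt is updated inside the sampling loop rather than fixed once up front, while the diffusion model's weights are never touched.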

Suhyeon Lee, Jong Chul Ye • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-Image Alignment | Pick-a-Pic v2 | Image Reward | 1.2898 | 27 |
| Text-to-Image Alignment | prompt dataset (test) | GenEval | 55.05 | 9 |
| Text-to-Image Generation | FLUX.1 dev 12B (test) | Image Reward | 1.258 | 3 |
| Text-to-Image Generation | SD large 3.5 8B (test) | Image Reward | 1.254 | 3 |
