
Direct Diffusion Score Preference Optimization via Stepwise Contrastive Policy-Pair Supervision

About

Diffusion models have achieved impressive results in generative tasks such as text-to-image synthesis, yet they often struggle to fully align outputs with nuanced user intent and maintain consistent aesthetic quality. Existing preference-based training methods like Diffusion Direct Preference Optimization help address these issues but rely on costly and potentially noisy human-labeled datasets. In this work, we introduce Direct Diffusion Score Preference Optimization (DDSPO), which directly derives per-timestep supervision from winning and losing policies when such policies are available. Unlike prior methods that operate solely on final samples, DDSPO provides dense, transition-level signals across the denoising trajectory. In practice, we avoid reliance on labeled data by automatically generating preference signals using a pretrained reference model: we contrast its outputs when conditioned on original prompts versus semantically degraded variants. This practical strategy enables effective score-space preference supervision without explicit reward modeling or manual annotations. Empirical results demonstrate that DDSPO improves text-image alignment and visual quality, outperforming or matching existing preference-based methods while requiring significantly less supervision. Our implementation is available at: https://dohyun-as.github.io/DDSPO
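
To make the transition-level idea concrete, the sketch below shows what a per-timestep, score-space preference loss of this kind could look like, assuming a diffusers-style UNet interface (noise prediction conditioned via encoder_hidden_states). The function name ddspo_step_loss, the degraded-prompt embedding degraded_emb, the temperature beta, and the logistic loss form are illustrative assumptions for exposition, not the paper's released code.

```python
# A minimal sketch of a per-timestep, score-space preference loss in the
# spirit of DDSPO. All names and the exact loss form are assumptions made
# for illustration, not the authors' implementation.
import torch
import torch.nn.functional as F

def ddspo_step_loss(policy_unet, ref_unet, noisy_latents, t,
                    orig_emb, degraded_emb, beta=0.1):
    """Contrast a frozen reference model's noise predictions under the
    original prompt (winning policy) and a semantically degraded prompt
    (losing policy), then pull the trained policy toward the winner."""
    with torch.no_grad():
        # Reference model as the "winning" policy: original prompt.
        eps_win = ref_unet(noisy_latents, t,
                           encoder_hidden_states=orig_emb).sample
        # Reference model as the "losing" policy: degraded prompt.
        eps_lose = ref_unet(noisy_latents, t,
                            encoder_hidden_states=degraded_emb).sample

    # Trained policy's prediction for the same denoising transition.
    eps_pi = policy_unet(noisy_latents, t,
                         encoder_hidden_states=orig_emb).sample

    # Per-sample squared errors against the winning / losing targets.
    err_win = F.mse_loss(eps_pi, eps_win, reduction="none").mean(dim=(1, 2, 3))
    err_lose = F.mse_loss(eps_pi, eps_lose, reduction="none").mean(dim=(1, 2, 3))

    # DPO-style logistic objective on the per-timestep margin: the loss is
    # small when the policy sits closer to the winning target than to the
    # losing one.
    return -F.logsigmoid(beta * (err_lose - err_win)).mean()
```

Because the frozen reference model supplies both the winning and losing targets here, no reward model or human preference labels enter the loss, which mirrors the label-free supervision strategy described above.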

Dohyun Kim, Seungwoo Lyu, Seung Wook Kim, Paul Hongsuck Seo • 2025

Related benchmarks

Task                           Dataset                 Metric                   Result   Rank
Text-to-Image Generation       GenEval                 GenEval Score            60.49    277
Text-to-Image Generation       MS-COCO (val)           FID                      16.39    112
Aesthetic Quality Improvement  HPS v2 (test)           HPSv2 Score              28.78    10
Aesthetic Quality Improvement  PartiPrompts v1 (test)  PickScore                22.7     10
Text-to-Image Alignment        T2I-CompBench           T2I-CompBench Alignment  0.5064   9
