
Rethinking Preference Alignment for Diffusion Models with Classifier-Free Guidance

About

Aligning large-scale text-to-image diffusion models with nuanced human preferences remains challenging. While direct preference optimization (DPO) is simple and effective, large-scale finetuning often shows a generalization gap. We take inspiration from test-time guidance and cast preference alignment as classifier-free guidance (CFG): a finetuned preference model acts as an external control signal during sampling. Building on this view, we propose a simple method that improves alignment without retraining the base model. To further enhance generalization, we decouple preference learning into two modules trained on positive and negative data, respectively. At inference, we form a contrastive guidance vector by subtracting their predictions (positive minus negative), scale it by a user-chosen strength, and add it to the base prediction at each step. This yields a sharper and controllable alignment signal. We evaluate on Stable Diffusion 1.5 and Stable Diffusion XL with Pick-a-Pic v2 and HPDv3, showing consistent quantitative and qualitative gains.
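The combination rule described above is easy to state in code. Below is a minimal sketch of the per-step guided noise prediction, assuming each model is a callable mapping (latent, timestep, prompt embedding) to an epsilon-prediction of the same shape as the latent; the function name, signature, and default strength are illustrative, not taken from the paper.

```python
import torch

def guided_noise_prediction(base_model, pos_model, neg_model,
                            x_t, t, prompt_emb, strength=2.0):
    """One denoising step's noise prediction with contrastive guidance.

    base_model: frozen base diffusion model (left unchanged; no retraining)
    pos_model:  preference module finetuned on preferred (positive) data
    neg_model:  preference module finetuned on dispreferred (negative) data
    strength:   user-chosen guidance scale (hypothetical default)
    """
    eps_base = base_model(x_t, t, prompt_emb)
    eps_pos = pos_model(x_t, t, prompt_emb)
    eps_neg = neg_model(x_t, t, prompt_emb)
    # Contrastive guidance vector: positive minus negative predictions,
    # scaled and added to the base prediction, CFG-style.
    return eps_base + strength * (eps_pos - eps_neg)

# Toy smoke test with stand-in models (random noise predictors).
if __name__ == "__main__":
    x_t = torch.randn(1, 4, 64, 64)        # latent at step t
    t = torch.tensor([500])                # diffusion timestep
    prompt_emb = torch.randn(1, 77, 768)   # text embedding
    dummy = lambda x, t, c: torch.randn_like(x)
    eps = guided_noise_prediction(dummy, dummy, dummy, x_t, t, prompt_emb)
    print(eps.shape)  # torch.Size([1, 4, 64, 64])
```

In an actual sampler, this prediction would replace the base model's output at every step, so alignment strength can be tuned at inference time without touching the base weights.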

Zhou Jiang, Yandong Wen, Zhen Liu • 2026

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Image Generation | Pick-a-Pic v2 (test) | PickScore: 92.9 | 42
Text-to-Image Generation | Parti-Prompts (test) | Aesthetic Score: 88.4 | 21
Text-to-Image Generation | Parti-Prompts 1632 prompts (test) | PickScore (PS): 74.8 | 12
Image Generation | Human Preference Evaluation 55 prompts | Votes: 500 | 6
