Rethinking Preference Alignment for Diffusion Models with Classifier-Free Guidance
About
Aligning large-scale text-to-image diffusion models with nuanced human preferences remains challenging. While direct preference optimization (DPO) is simple and effective, large-scale finetuning often exhibits a generalization gap. We take inspiration from test-time guidance and cast preference alignment as classifier-free guidance (CFG): a finetuned preference model acts as an external control signal during sampling. Building on this view, we propose a simple method that improves alignment without retraining the base model. To further enhance generalization, we decouple preference learning into two modules trained on positive and negative data, respectively, and form a *contrastive guidance* vector at inference by subtracting their predictions (positive minus negative), scaling the difference by a user-chosen strength, and adding it to the base prediction at each denoising step. This yields a sharper, controllable alignment signal. We evaluate on Stable Diffusion 1.5 and Stable Diffusion XL with Pick-a-Pic v2 and HPDv3, showing consistent quantitative and qualitative gains.
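The contrastive guidance step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the use of NumPy arrays in place of real noise-prediction tensors, and the toy values are all assumptions for demonstration.

```python
import numpy as np

def contrastive_guided_eps(eps_base, eps_pos, eps_neg, strength):
    """Combine noise predictions for one sampling step.

    eps_base: prediction from the frozen base diffusion model.
    eps_pos / eps_neg: predictions from the modules trained on
        positive / negative preference data (illustrative names).
    strength: user-chosen guidance scale; 0 recovers the base model.
    All eps_* arrays must share the same shape.
    """
    # Contrastive guidance vector: positive minus negative,
    # scaled and added to the base prediction.
    return eps_base + strength * (eps_pos - eps_neg)

# Toy scalar example standing in for full latent tensors:
eps = contrastive_guided_eps(np.array(0.1), np.array(0.3), np.array(0.2), 5.0)
```

In a real sampler, this combination would replace the plain noise prediction inside each denoising step, analogous to how standard CFG mixes conditional and unconditional predictions.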
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | Pick-a-Pic v2 (test) | PickScore | 92.9 | 42 |
| Text-to-Image Generation | Parti-Prompts (test) | Aesthetic Score | 88.4 | 21 |
| Text-to-Image Generation | Parti-Prompts 1632 prompts (test) | PickScore (PS) | 74.8 | 12 |
| Image Generation | Human Preference Evaluation 55 prompts | Votes | 500 | 6 |