Margin-aware Preference Optimization for Aligning Diffusion Models without Reference
About
Modern preference alignment methods, such as DPO, rely on divergence regularization toward a reference model for training stability, but this creates a fundamental problem we call "reference mismatch." In this paper, we investigate the negative impacts of reference mismatch when aligning text-to-image (T2I) diffusion models, showing that larger reference mismatch hinders effective adaptation given the same amount of data, e.g., when learning new artistic styles or personalizing to specific objects. We demonstrate this phenomenon across T2I diffusion models and introduce margin-aware preference optimization (MaPO), a reference-agnostic approach that breaks free from this constraint. By directly optimizing the likelihood margin between preferred and dispreferred outputs under the Bradley-Terry model, without anchoring to a reference, MaPO casts diverse T2I tasks as unified pairwise preference optimization. We validate MaPO's versatility across five challenging domains: (1) safe generation, (2) style adaptation, (3) cultural representation, (4) personalization, and (5) general preference alignment. Our results show that MaPO's advantage grows with the severity of reference mismatch, outperforming both DPO and specialized methods such as DreamBooth while reducing training time by 15%. MaPO thus emerges as a versatile and memory-efficient method for generic T2I adaptation tasks.
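To make the core idea concrete, below is a minimal sketch of a reference-free Bradley-Terry margin objective of the kind described above. It assumes you already have per-sample log-likelihoods for the preferred and dispreferred images (`logp_preferred`, `logp_dispreferred`); the function name, the `beta` temperature, and the scalar-log-likelihood setup are illustrative assumptions, not the paper's exact implementation, which additionally shapes the loss for diffusion training.

```python
import torch
import torch.nn.functional as F


def margin_preference_loss(logp_preferred, logp_dispreferred, beta=0.1):
    """Reference-free Bradley-Terry margin loss (illustrative sketch).

    Pushes up the likelihood margin between preferred and dispreferred
    samples via -log sigmoid(beta * (logp_w - logp_l)); no reference
    model's log-likelihoods appear anywhere in the objective.
    """
    margin = logp_preferred - logp_dispreferred
    # logsigmoid is numerically stable for large negative margins
    return -F.logsigmoid(beta * margin).mean()


# Toy usage: two preference pairs with scalar log-likelihoods.
logp_w = torch.tensor([-10.0, -12.0])  # preferred samples
logp_l = torch.tensor([-11.0, -15.0])  # dispreferred samples
loss = margin_preference_loss(logp_w, logp_l)
```

The loss is strictly positive and shrinks monotonically as the margin grows, so gradient descent widens the gap between the two likelihoods without any KL anchor to a frozen reference model.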
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | Pick-a-Pic v2 (test) | PickScore | 55.9 | 42 |
| Text-to-Image Generation | Parti-Prompts (test) | Aesthetic Score | 72.4 | 21 |
| Text-to-image generation evaluation | HPS v2 | HPS Score (Anime) | 28.39 | 18 |
| Text-to-image generation evaluation | Pick-a-Pic unique v2 (val) | PickScore | 22.24 | 13 |
| Aesthetic Quality Improvement | HPS v2 (test) | HPSv2 Score | 28.22 | 10 |
| Aesthetic Quality Improvement | PartiPrompts v1 (test) | PickScore | 22.3 | 10 |
| Text-to-Image Generation | HPD v2 (test) | PickScore | 50.8 | 10 |
| Image Generation | Human Preference Evaluation 55 prompts | Votes | 197 | 6 |