
InPO: Inversion Preference Optimization with Reparametrized DDIM for Efficient Diffusion Model Alignment

About

Without using an explicit reward model, direct preference optimization (DPO) employs paired human preference data to fine-tune generative models, a method that has garnered considerable attention in large language models (LLMs). However, the alignment of text-to-image (T2I) diffusion models with human preferences remains underexplored. Compared with supervised fine-tuning, existing methods for aligning diffusion models suffer from low training efficiency and subpar generation quality due to the long Markov chain process and the intractability of the reverse process. To address these limitations, we introduce DDIM-InPO, an efficient method for direct preference alignment of diffusion models. Our approach conceptualizes the diffusion model as a single-step generative model, allowing us to selectively fine-tune the outputs of specific latent variables. To accomplish this, we first assign implicit rewards to any latent variable directly via a reparameterization technique. We then construct an inversion technique to estimate appropriate latent variables for preference optimization. This enables the diffusion model to fine-tune only the outputs of latent variables that are strongly correlated with the preference dataset. Experimental results indicate that DDIM-InPO achieves state-of-the-art performance with just 400 steps of fine-tuning, surpassing all preference-alignment baselines for T2I diffusion models on human preference evaluation tasks.
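The recipe in the abstract (DDIM inversion to recover a latent, then a DPO-style loss on the denoising error at that latent as an implicit reward) can be sketched in a scalar toy form. Everything below is an illustrative assumption, not the paper's implementation: the `ALPHA_BAR` schedule, the `eps_model` signature, the choice to invert with the reference model, and the hyperparameter `beta` are all hypothetical.

```python
import math

# Hypothetical cumulative noise schedule alpha_bar_t (assumption, not the paper's).
ALPHA_BAR = [1.0, 0.9, 0.7, 0.5]

def ddim_invert(x0, eps_model, t_target):
    """Deterministically map a clean sample x0 to the latent x_{t_target}
    by running the DDIM update in the noising direction."""
    x = x0
    for t in range(t_target):
        a_t, a_next = ALPHA_BAR[t], ALPHA_BAR[t + 1]
        eps = eps_model(x, t)
        x0_pred = (x - math.sqrt(1 - a_t) * eps) / math.sqrt(a_t)
        x = math.sqrt(a_next) * x0_pred + math.sqrt(1 - a_next) * eps
    return x

def inpo_loss(x0_w, x0_l, eps_theta, eps_ref, t, beta=1.0):
    """DPO-style preference loss on inverted latents: the implicit reward of a
    sample is the negative denoising error of the model at its latent x_t,
    measured relative to a frozen reference model eps_ref."""
    def latent_and_target(x0):
        # Invert with the reference model so both models are scored on the
        # same latent (a simplifying assumption of this sketch).
        x_t = ddim_invert(x0, eps_ref, t)
        a_t = ALPHA_BAR[t]
        # Noise implied by x_t relative to the clean sample x0.
        eps_target = (x_t - math.sqrt(a_t) * x0) / math.sqrt(1 - a_t)
        return x_t, eps_target

    xw, ew = latent_and_target(x0_w)  # preferred ("winner") sample
    xl, el = latent_and_target(x0_l)  # dispreferred ("loser") sample
    inner = ((eps_theta(xw, t) - ew) ** 2 - (eps_ref(xw, t) - ew) ** 2) \
          - ((eps_theta(xl, t) - el) ** 2 - (eps_ref(xl, t) - el) ** 2)
    # -log sigmoid(-beta * inner) == log(1 + exp(beta * inner))
    return math.log(1.0 + math.exp(beta * inner))
```

When the fine-tuned model equals the reference, the inner term vanishes and the loss sits at log 2, the usual DPO starting point; training decreases the loss by lowering the denoising error on preferred latents relative to dispreferred ones.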

Yunhong Lu, Qichao Wang, Hengyuan Cao, Xierui Wang, Xiaoyin Xu, Min Zhang • 2025

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Image Generation | Parti-Prompts (test) | Aesthetic Score 74.63 | 21
Text-to-Image Generation | HPSv2 (test) | HPS 0.9022 | 18
Text-to-Image Preference Evaluation | HPD v2 (test) | Aesthetic (Mean) 6.182 | 14
Automatic preference evaluation | Pick-a-Pic v2 (test) | Aesthetic Score (Median) 6.0372 | 9
Automatic preference evaluation | Parti-Prompts | Aesthetic Score (Median) 5.6056 | 5

Other info

Code
