
Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control

About

Dynamical generative models that produce samples through an iterative process, such as Flow Matching and denoising diffusion models, have seen widespread use, but theoretically sound methods for improving these models with reward fine-tuning remain scarce. In this work, we cast reward fine-tuning as stochastic optimal control (SOC). Critically, we prove that a very specific memoryless noise schedule must be enforced during fine-tuning in order to account for the dependency between the noise variable and the generated samples. We also propose a new algorithm, Adjoint Matching, which outperforms existing SOC algorithms by casting the SOC problem as a regression problem. We find that our approach significantly improves over existing methods for reward fine-tuning, achieving better consistency, realism, and generalization to unseen human preference reward models, while retaining sample diversity.
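As a rough illustration of the simulate-then-regress idea, the sketch below fine-tunes a toy control network with an Adjoint Matching-style regression loss. It is reconstructed from the abstract, not taken from the paper's code: `base_drift`, `reward`, the network architecture, and all hyperparameters are hypothetical stand-ins, and the constant noise level `SIGMA` replaces the specific memoryless noise schedule the paper prescribes.

```python
# Hedged sketch of an Adjoint Matching-style fine-tuning loop on a toy
# 2-D base process. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM, STEPS, BATCH, SIGMA = 2, 20, 128, 1.0
dt = 1.0 / STEPS

def base_drift(x, t):
    # Stand-in for the pretrained model's drift b(x, t); a toy linear field.
    return -x

def reward(x):
    # Stand-in terminal reward r(x); favors samples near (1, 1).
    return -((x - 1.0) ** 2).sum(dim=-1)

# The control network u(x, t) is the only trainable component.
control = nn.Sequential(nn.Linear(DIM + 1, 64), nn.SiLU(), nn.Linear(64, DIM))
opt = torch.optim.Adam(control.parameters(), lr=1e-3)

def u(x, t):
    t_col = torch.full((x.shape[0], 1), t)
    return control(torch.cat([x, t_col], dim=-1))

for it in range(200):
    # 1) Simulate the controlled SDE dX = (b + sigma*u) dt + sigma dB and
    #    record the visited states; no gradients flow through this rollout.
    #    (The paper requires a memoryless noise schedule here; the constant
    #    SIGMA is a simplification to keep the sketch short.)
    x = torch.randn(BATCH, DIM)
    states, times = [], []
    with torch.no_grad():
        for k in range(STEPS):
            t = k * dt
            states.append(x)
            times.append(t)
            x = x + (base_drift(x, t) + SIGMA * u(x, t)) * dt \
                  + SIGMA * dt ** 0.5 * torch.randn_like(x)

    # 2) Solve the "lean" adjoint ODE backwards from a(1) = grad of the
    #    terminal cost g(x) = -r(x). It depends only on the base drift and
    #    the reward gradient, never on a learned value function.
    x1 = x.detach().requires_grad_(True)
    a = -torch.autograd.grad(reward(x1).sum(), x1)[0]
    adjoints = [None] * STEPS
    for k in reversed(range(STEPS)):
        _, a_step = torch.autograd.functional.vjp(
            lambda xx: base_drift(xx, times[k]), states[k], a)
        a = a + dt * a_step
        adjoints[k] = a

    # 3) Regression step: match u(x_t, t) to -sigma * a(t), with the
    #    adjoint targets detached (a plain least-squares objective).
    loss = sum(
        ((u(states[k], times[k]) + SIGMA * adjoints[k].detach()) ** 2)
        .sum(dim=-1).mean() * dt
        for k in range(STEPS))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point the sketch tries to mirror is that the backward adjoint recursion in step 2 touches only the frozen base drift and the reward gradient, so the whole fine-tuning procedure reduces to rollout plus least-squares regression, with no value-function estimation in the loop.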

Carles Domingo-Enrich, Michal Drozdzal, Brian Karrer, Ricky T. Q. Chen • 2024

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
| --- | --- | --- | --- | --- |
| Offline Reinforcement Learning | OGBench | Overall Score | 35 | 21 |
| Text-to-Image Generation | Stable Diffusion Alignment Prompts 1.5 (test) | ImageReward | 0.7873 | 8 |
| Text-to-Image Alignment | HPS v2 | Reward | 3.59 | 6 |
| Text-to-Image Alignment | PickScore | Reward | 22.78 | 6 |
| Text-to-Image Alignment | Aesthetic Score | Reward | 6.87 | 6 |
| Reward Maximization | Illustrative Setting: Novelty-seeking reward maximization | SQ_beta | 56.7 | 4 |
| Novelty-seeking Molecular Design for Energy Maximization | FlowMol | E[r(x)] | 29.1 | 3 |
| Conservative Manifold Exploration | Conservative Manifold Exploration | Expected r(x) | 35.08 | 3 |
| Expected Reward Maximization under Optimal Transport Distance Regularization | Illustrative Synthetic Environment v1 (test) | Expected Reward E[r(x)] | 35 | 3 |
| Molecular Design | QM9 | E[r(x)] | 29.1 | 3 |