
Flow Density Control: Generative Optimization Beyond Entropy-Regularized Fine-Tuning

About

Adapting large-scale foundation flow and diffusion generative models to optimize task-specific objectives, while preserving prior information, is crucial for real-world applications such as molecular design, protein docking, and creative image generation. Existing principled fine-tuning methods aim to maximize the expected reward of generated samples while retaining knowledge from the pre-trained model via KL-divergence regularization. In this work, we tackle the significantly more general problem of optimizing general utilities beyond average rewards, including risk-averse and novelty-seeking reward maximization, diversity measures for exploration, and experiment-design objectives, among others. Likewise, we consider more general ways to preserve prior information beyond the KL divergence, such as optimal transport distances and Rényi divergences. To this end, we introduce Flow Density Control (FDC), a simple algorithm that reduces this complex problem to a specific sequence of simpler fine-tuning tasks, each solvable via scalable established methods. We derive convergence guarantees for the proposed scheme under realistic assumptions by leveraging recent understanding of mirror flows. Finally, we validate our method on illustrative settings, text-to-image, and molecular design tasks, showing that it can steer pre-trained generative models to optimize objectives and solve practically relevant tasks beyond the reach of current fine-tuning schemes.
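The reduction described above can be illustrated with a toy mirror-descent sketch on a discrete sample space: maximizing a general utility F(p) is turned into a sequence of standard KL-regularized fine-tuning steps, where each step's surrogate reward is the first variation of the utility at the current density. The function names (`flow_density_control`, `kl_regularized_step`), the toy novelty-seeking utility, and the step size `eta` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def kl_regularized_step(p, reward, eta):
    # Closed-form solution of  max_q E_q[reward] - (1/eta) * KL(q || p):
    # the tilted density q* ∝ p * exp(eta * reward). This is exactly the
    # form solved by standard KL-regularized fine-tuning methods.
    q = p * np.exp(eta * reward)
    return q / q.sum()

def flow_density_control(p0, utility_grad, eta=0.5, num_iters=50):
    # Mirror-descent-style reduction (illustrative sketch): at each
    # iteration, linearize the general utility F at the current density p
    # and solve the resulting KL-regularized reward-maximization subproblem.
    p = p0.copy()
    for _ in range(num_iters):
        reward = utility_grad(p)                  # first variation dF/dp at p
        p = kl_regularized_step(p, reward, eta)   # one "fine-tuning" step
    return p

# Toy novelty-seeking utility F(p) = E_p[r] + H(p) on a 4-point space,
# whose maximizer is the softmax of the reward vector r.
r = np.array([1.0, 0.2, 0.2, 0.2])

def utility_grad(p):
    # d/dp of  sum(p * r) - sum(p * log p)
    return r - np.log(p) - 1.0

p0 = np.full(4, 0.25)
p_star = flow_density_control(p0, utility_grad)
```

In this toy case each subproblem has a closed form, so the scheme converges to the softmax of the reward vector; in the paper's setting, each subproblem is instead a full KL-regularized fine-tuning run of the flow or diffusion model.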

Riccardo De Santi, Marin Vlastelica, Ya-Ping Hsieh, Zebang Shen, Niao He, Andreas Krause • 2025

Related benchmarks

Task | Dataset | Result | Rank
Reward Maximization | Illustrative Setting: Novelty-seeking reward maximization | SQ_beta: 452.5 | 4
Text-to-Image Generation | Text-to-image, Prompt: 'A creative bridge design' | Vendi Score: 2.47 | 4
Conservative Manifold Exploration | Conservative Manifold Exploration | Expected r(x): 35.38 | 3
Expected reward maximization under optimal transport distance regularization | Illustrative Synthetic Environment v1 (test) | Expected Reward E[r(x)]: 35.4 | 3
Manifold Exploration | Synthetic 2D Manifold | H(p^pi): 7.14 | 3
Molecular Design | QM9 | E[r(x)]: 27.5 | 3
Novelty-seeking molecular design for energy maximization | FlowMol | E[r(x)]: 27.5 | 3
