
Diffusion Alignment as Variational Expectation-Maximization

About

Diffusion alignment aims to optimize diffusion models for downstream objectives. While existing methods based on reinforcement learning or direct backpropagation achieve considerable success in maximizing rewards, they often suffer from reward over-optimization and mode collapse. We introduce Diffusion Alignment as Variational Expectation-Maximization (DAV), a framework that formulates diffusion alignment as an iterative process alternating between two complementary phases: the E-step and the M-step. In the E-step, we employ test-time search to generate diverse, reward-aligned samples. In the M-step, we refine the diffusion model using the samples discovered by the E-step. We demonstrate that DAV can optimize reward while preserving diversity for both continuous and discrete tasks: text-to-image synthesis and DNA sequence design. Our code is available at https://github.com/Jaewoopudding/dav.
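The alternation described above can be illustrated with a minimal toy sketch. This is not the authors' implementation: here the "diffusion model" is stood in for by a simple Gaussian sampler, the reward is a hypothetical function favoring samples near a target value, the E-step is search-by-sampling with top-k selection, and the M-step refits the sampler to the selected samples by moment matching.

```python
import random

def reward(x, target=1.0):
    # Hypothetical reward: higher when the sample is closer to the target.
    return -abs(x - target)

def e_step(mu, sigma, n_candidates=256, top_k=32):
    # E-step (sketch): test-time search — draw candidates from the current
    # "model" and keep the most reward-aligned ones.
    candidates = [random.gauss(mu, sigma) for _ in range(n_candidates)]
    candidates.sort(key=reward, reverse=True)
    return candidates[:top_k]

def m_step(samples):
    # M-step (sketch): refit the model to the selected samples.
    # A real diffusion model would instead be fine-tuned on them.
    mu = sum(samples) / len(samples)
    var = sum((x - mu) ** 2 for x in samples) / len(samples)
    return mu, max(var ** 0.5, 1e-3)  # floor sigma to keep sampling alive

def dav_loop(mu=0.0, sigma=1.0, iterations=20):
    for _ in range(iterations):
        selected = e_step(mu, sigma)
        mu, sigma = m_step(selected)
    return mu, sigma

random.seed(0)
mu, sigma = dav_loop()
print(mu, sigma)  # mu drifts toward the reward target
```

Because the E-step keeps many high-reward candidates rather than a single argmax, the refit distribution retains spread, which is the intuition behind DAV's reward-with-diversity behavior.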

Jaewoo Lee, Minsu Kim, Sanghyeok Choi, Inhyuck Song, Sujin Yun, Hyeongyu Kang, Woocheol Shin, Taeyoung Yun, Kiyoung Om, Jinkyoo Park • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| DNA sequence design | Enhancer dataset (held-out evaluation) | Pred-activity: 924 | 9 |
| Text-to-Image Synthesis | 40 animal prompts, Stable Diffusion v1.5 (test) | Aesthetic Score: 9.18 | 9 |
