
Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality

About

Diffusion models have emerged as an expressive family of generative models rivaling GANs in sample quality and autoregressive models in likelihood scores. Standard diffusion models typically require hundreds of forward passes through the model to generate a single high-fidelity sample. We introduce Differentiable Diffusion Sampler Search (DDSS): a method that optimizes fast samplers for any pre-trained diffusion model by differentiating through sample quality scores. We also present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models. We show that optimizing the degrees of freedom of GGDM samplers by maximizing sample quality scores via gradient descent leads to improved sample quality. Our optimization procedure backpropagates through the sampling process using the reparametrization trick and gradient rematerialization. DDSS achieves strong results on unconditional image generation across various datasets (e.g., FID scores on LSUN church 128x128 of 11.6 with only 10 inference steps, and 4.82 with 20 steps, compared to 51.1 and 14.9 with the strongest DDPM/DDIM baselines). Our method is compatible with any pre-trained diffusion model, with no fine-tuning or re-training required.
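The key mechanism the abstract describes is backpropagating through stochastic sampling via the reparametrization trick: a Gaussian sample z ~ N(mu, sigma^2) is rewritten as z = mu + sigma * eps with eps ~ N(0, 1), so z becomes a differentiable function of the sampler's parameters. Below is a minimal NumPy sketch of this trick on a toy objective (f, mu, and sigma here are illustrative assumptions, not values from the paper); in DDSS the same idea is applied at every step of the sampling chain, with gradient rematerialization (checkpointing) keeping memory manageable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(z) = z**2 with z ~ N(mu, sigma^2).
# Analytically, E[f(z)] = mu**2 + sigma**2, so dE[f]/dmu = 2*mu.
mu, sigma = 1.5, 0.8

# Reparametrization trick: write z = mu + sigma * eps with eps ~ N(0, 1).
# Now z depends differentiably on mu, and the gradient of the Monte Carlo
# objective is the sample average of df/dz * dz/dmu = 2*z * 1.
eps = rng.standard_normal(200_000)
z = mu + sigma * eps
grad_estimate = np.mean(2.0 * z)  # estimates dE[f]/dmu = 2*mu = 3.0

print(grad_estimate)
```

Had we sampled z directly from N(mu, sigma^2), the draw would be a non-differentiable black box; reparametrization moves the randomness into a parameter-free eps so that gradient descent can adjust the sampler's parameters, which is what lets DDSS tune its fast samplers against a sample-quality score.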

Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi • 2022

Related benchmarks

Task | Dataset | Result | Rank
Unconditional Image Generation | CIFAR-10 (test) | FID 4.25 | 216
Unconditional Image Generation | LSUN Bedrooms unconditional | FID 4.82 | 96
Image Generation | LSUN Church-Outdoor unconditional (test) | FID 6.74 | 75
