
Manifold Preserving Guided Diffusion

About

Despite the recent advancements, conditional image generation still faces challenges of cost, generalizability, and the need for task-specific training. In this paper, we propose Manifold Preserving Guided Diffusion (MPGD), a training-free conditional generation framework that leverages pretrained diffusion models and off-the-shelf neural networks with minimal additional inference cost for a broad range of tasks. Specifically, we leverage the manifold hypothesis to refine the guided diffusion steps and introduce a shortcut algorithm in the process. We then propose two methods for on-manifold training-free guidance using pre-trained autoencoders and demonstrate that our shortcut inherently preserves the manifolds when applied to latent diffusion models. Our experiments show that MPGD is efficient and effective for solving a variety of conditional generation applications in low-compute settings, and can consistently offer up to 3.8x speed-ups with the same number of diffusion steps while maintaining high sample quality compared to the baselines.
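The abstract's key idea — estimate the clean sample, apply the guidance gradient directly to that estimate (the "shortcut"), and keep the estimate on the data manifold via a pretrained autoencoder — can be sketched as one guided DDIM-style step. This is an illustrative sketch under assumed names and a toy projection, not the authors' implementation; `mpgd_step`, `guidance_grad`, and `project` are hypothetical placeholders.

```python
import numpy as np

def mpgd_step(x_t, eps_pred, alpha_t, alpha_prev, guidance_grad, lr=0.1,
              project=None):
    """One guided DDIM-style step (illustrative sketch).

    x_t           : current noisy sample at step t
    eps_pred      : noise predicted by the pretrained diffusion model
    alpha_t/prev  : cumulative alpha-bar values at steps t and t-1
    guidance_grad : gradient of a guidance loss evaluated on the clean
                    estimate (e.g. from an off-the-shelf network)
    project       : optional manifold projection, e.g. an autoencoder
                    round-trip decoder(encoder(x0))
    """
    # (1) Tweedie-style estimate of the clean sample from x_t
    x0 = (x_t - np.sqrt(1.0 - alpha_t) * eps_pred) / np.sqrt(alpha_t)
    # (2) shortcut guidance: update x0 directly, without backpropagating
    #     through the diffusion model (this is the inference-cost saving)
    x0 = x0 - lr * guidance_grad
    # (3) optionally project the guided estimate back onto the manifold
    if project is not None:
        x0 = project(x0)
    # (4) deterministic DDIM update toward step t-1
    return np.sqrt(alpha_prev) * x0 + np.sqrt(1.0 - alpha_prev) * eps_pred
```

For latent diffusion models, the abstract notes the shortcut itself preserves the latent manifold, so `project` can be left as `None` there.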

Yutong He, Naoki Murata, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Dongjun Kim, Wei-Hsiang Liao, Yuki Mitsufuji, J. Zico Kolter, Ruslan Salakhutdinov, Stefano Ermon • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-conditional Image Generation | ImageNet | FID | 239 | 158 |
| Conditional Image Generation | CIFAR-10 | FID | 88 | 77 |
| 4x super-resolution | FFHQ 256x256 | PSNR | 24.01 | 33 |
| Super-Resolution (4x) | ImageNet | PSNR | 23.93 | 30 |
| Inpaint (box) | ImageNet | PSNR | 22.76 | 26 |
| Gaussian deblur | FFHQ 256x256 | PSNR | 24.42 | 25 |
| Super-Resolution (4x) | Cats | LPIPS | 0.09 | 14 |
| Gaussian Deblur 3 | Cats | LPIPS | 0.14 | 14 |
| Gaussian Deblur 12 | Cats | LPIPS | 0.32 | 14 |
| Gaussian Deblurring | ImageNet Gaussian Blur sigma=3 | LPIPS | 0.23 | 14 |
Showing 10 of 34 benchmark rows.
