Parallel Sampling of Diffusion Models
About
Diffusion models are powerful generative models but suffer from slow sampling, often taking 1000 sequential denoising steps for one sample. As a result, considerable efforts have been directed toward reducing the number of denoising steps, but these methods can hurt sample quality. Instead of reducing the number of denoising steps (trading quality for speed), in this paper we explore an orthogonal approach: can we run the denoising steps in parallel (trading compute for speed)? In spite of the sequential nature of the denoising steps, we show that, surprisingly, it is possible to parallelize sampling via Picard iterations, by guessing the solution of future denoising steps and iteratively refining until convergence. With this insight, we present ParaDiGMS, a novel method to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel. ParaDiGMS is the first diffusion sampling method that enables trading compute for speed, and it is even compatible with existing fast sampling techniques such as DDIM and DPMSolver. Using ParaDiGMS, we improve sampling speed by 2-4x across a range of robotics and image generation models, giving state-of-the-art sampling speeds of 0.2s on 100-step DiffusionPolicy and 14.6s on 1000-step StableDiffusion-v2 with no measurable degradation of task reward, FID score, or CLIP score.
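The core idea of Picard iteration can be illustrated on a toy ODE solved with Euler steps. Below is a minimal sketch (not the paper's implementation): instead of advancing the trajectory one step at a time, we guess the entire trajectory and repeatedly refine it; within each refinement sweep, all drift (denoiser) evaluations use only the previous sweep's trajectory, so they are independent and could run in parallel. The `drift` function here is a hypothetical stand-in for a diffusion model's denoising step.

```python
import numpy as np

def drift(t, x):
    # Toy stand-in for the diffusion model's denoising drift: dx/dt = -x.
    return -x

def sequential_euler(x0, n_steps, h):
    # Standard sequential sampling: each step depends on the previous one.
    x = x0
    for i in range(n_steps):
        x = x + h * drift(i * h, x)
    return x

def picard_parallel_euler(x0, n_steps, h, n_iters):
    # Guess the whole trajectory (here: constant at x0), then refine it.
    xs = np.full(n_steps + 1, x0, dtype=float)
    times = np.arange(n_steps) * h
    for _ in range(n_iters):
        # All drift evaluations in a sweep read only the *old* trajectory,
        # so they are independent and can be batched/parallelized.
        drifts = drift(times, xs[:-1])
        # Picard update: x_t = x_0 + sum_{i<t} h * f(t_i, x_i).
        xs = x0 + np.concatenate([[0.0], np.cumsum(h * drifts)])
    return xs[-1]
```

After at most `n_steps` sweeps the refined trajectory matches the sequential one exactly (each sweep makes one more prefix of the trajectory correct); in practice far fewer sweeps suffice, which is the source of the speedup when sweeps run in parallel.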
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Unconditional Image Generation | CIFAR-10 32x32 (test) | FID | 6.19 | 94 |
| Conditional Image Generation | ImageNet 64x64 (val) | FID | 6.38 | 48 |
| Unconditional Image Generation | FFHQ 64x64 (val) | FID | 8.81 | 44 |
| Image Generation | LDM-CelebA | FID | 36.19 | 20 |
| Unconditional Image Generation | CIFAR-10 unconditional 32x32 | FID | 9.43 | 17 |
| Robotic Control | PushT | Time (s) | 0.71 | 14 |
| Image Generation | Stable Diffusion (SD) 1.4 | CLIP Score | 26.34 | 12 |
| Image Generation | COCO Captions 2014 (val) | LPIPS (w/ G.T.) | 0.8 | 11 |