One-Step Diffusion Distillation via Deep Equilibrium Models

About

Diffusion models excel at producing high-quality samples but naively require hundreds of iterations, prompting multiple attempts to distill the generation process into a faster network. However, many existing approaches suffer from a variety of challenges: the process for distillation training can be complex, often requiring multiple training stages, and the resulting models perform poorly when utilized in single-step generative applications. In this paper, we introduce a simple yet effective means of distilling diffusion models directly from initial noise to the resulting image. Of particular importance to our approach is to leverage a new Deep Equilibrium (DEQ) model as the distilled architecture: the Generative Equilibrium Transformer (GET). Our method enables fully offline training with just noise/image pairs from the diffusion model while achieving superior performance compared to existing one-step methods on comparable training budgets. We demonstrate that the DEQ architecture is crucial to this capability, as GET matches a $5\times$ larger ViT in terms of FID scores while striking a critical balance of computational cost and image quality. Code, checkpoints, and datasets are available.
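
To make the pipeline concrete, below is a minimal PyTorch sketch of the two ideas in the abstract: building an offline dataset of noise/image pairs by running the pretrained diffusion teacher to completion, and fitting a weight-tied, DEQ-style student that maps initial noise to an image in a single forward pass. The names here (`TinyDEQ`, `teacher_sample`, `make_pairs`, `distill`) and the plain MSE objective are illustrative assumptions, not the paper's actual GET architecture, fixed-point solver, or training loss.

```python
# Sketch only: the real GET is a transformer solved to a fixed point;
# this toy version unrolls a fixed number of weight-tied iterations.
import torch
import torch.nn as nn


class TinyDEQ(nn.Module):
    """Weight-tied block iterated toward a fixed point z* = f(z*, x)."""

    def __init__(self, dim: int = 256, iters: int = 12):
        super().__init__()
        self.inject = nn.Linear(dim, dim)   # injects the noise input x at every step
        self.block = nn.Sequential(         # one layer, reused (weight-tied) each step
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        self.iters = iters

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.zeros_like(x)
        for _ in range(self.iters):         # naive fixed-point iteration z <- f(z, x)
            z = self.block(z + self.inject(x))
        return z


def make_pairs(teacher_sample, n: int, dim: int = 256):
    """Offline dataset: pair each initial noise with the teacher's final sample.

    `teacher_sample` is a stand-in for full multi-step diffusion sampling.
    """
    noise = torch.randn(n, dim)
    with torch.no_grad():
        images = teacher_sample(noise)
    return noise, images


def distill(student: nn.Module, noise, images, epochs: int = 10):
    """One-step distillation: regress student(noise) onto the teacher's images."""
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    for _ in range(epochs):
        pred = student(noise)
        # MSE is used here for illustration; the paper's objective may differ.
        loss = nn.functional.mse_loss(pred, images)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because training needs only the precomputed (noise, image) pairs, the teacher never has to be queried during distillation, which is what makes the procedure fully offline. The sketch backpropagates through unrolled iterations for simplicity; DEQ models are typically trained with a root solver and implicit differentiation instead.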

Zhengyang Geng, Ashwini Pokle, J. Zico Kolter • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Unconditional Image Generation | CIFAR-10 unconditional | FID | 6.91 | 159 |
| Conditional Image Generation | CIFAR-10 | FID | 6.25 | 71 |
| Conditional Image Generation | CIFAR10 (test) | Fréchet Inception Distance | 6.25 | 66 |
| Image Generation | CIFAR-10 unconditional (test) | FID | 5.49 | 39 |
