
Tackling the Generative Learning Trilemma with Denoising Diffusion GANs

About

A wide variety of deep generative models has been developed in the past decade. Yet, these models often struggle to simultaneously address three key requirements: high sample quality, mode coverage, and fast sampling. We call the challenge imposed by these requirements the generative learning trilemma, as existing models often trade some of them for others. In particular, denoising diffusion models have shown impressive sample quality and diversity, but their expensive sampling does not yet allow them to be applied in many real-world applications. In this paper, we argue that slow sampling in these models is fundamentally attributed to the Gaussian assumption in the denoising step, which is justified only for small step sizes. To enable denoising with large steps, and hence to reduce the total number of denoising steps, we propose to model the denoising distribution using a complex multimodal distribution. We introduce denoising diffusion generative adversarial networks (denoising diffusion GANs) that model each denoising step using a multimodal conditional GAN. Through extensive evaluations, we show that denoising diffusion GANs obtain sample quality and diversity competitive with original diffusion models while being 2000$\times$ faster on the CIFAR-10 dataset. Compared to traditional GANs, our model exhibits better mode coverage and sample diversity. To the best of our knowledge, denoising diffusion GAN is the first model that reduces sampling cost in diffusion models to an extent that allows them to be applied to real-world applications inexpensively. Project page and code can be found at https://nvlabs.github.io/denoising-diffusion-gan
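The abstract's key idea is that a conditional GAN generator predicts the clean sample at each of a small number of large denoising steps, so the implied denoising distribution can be multimodal. The sampling loop below is a minimal sketch of that idea, not the paper's released code: the `generator` stub, the step count, and the noise schedule are illustrative assumptions; only the posterior formula for $q(x_{t-1} \mid x_t, x_0)$ follows the standard diffusion parameterization.

```python
import numpy as np

T = 4  # denoising diffusion GANs use very few steps (e.g. 2-8)
betas = np.linspace(0.1, 0.7, T)   # illustrative large-step noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def generator(x_t, t, z):
    # Stub standing in for the trained conditional GAN generator,
    # which maps (x_t, t, latent z) to a prediction of the clean x_0.
    return 0.5 * x_t + 0.1 * z

def posterior_sample(x0_pred, x_t, t, rng):
    """Sample x_{t-1} from the Gaussian posterior q(x_{t-1} | x_t, x_0)."""
    if t == 0:
        return x0_pred
    ab_t, ab_prev = alpha_bars[t], alpha_bars[t - 1]
    coef_x0 = np.sqrt(ab_prev) * betas[t] / (1.0 - ab_t)
    coef_xt = np.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - ab_t)
    var = betas[t] * (1.0 - ab_prev) / (1.0 - ab_t)
    mean = coef_x0 * x0_pred + coef_xt * x_t
    return mean + np.sqrt(var) * rng.standard_normal(x_t.shape)

def sample(shape, rng):
    x_t = rng.standard_normal(shape)    # start from pure noise x_T
    for t in reversed(range(T)):
        # A fresh latent z at each step is what makes the implied
        # denoising distribution p(x_{t-1} | x_t) multimodal.
        z = rng.standard_normal(shape)
        x0_pred = generator(x_t, t, z)
        x_t = posterior_sample(x0_pred, x_t, t, rng)
    return x_t

rng = np.random.default_rng(0)
img = sample((32, 32, 3), rng)
```

With only `T = 4` generator evaluations per sample, this loop illustrates why the approach is orders of magnitude cheaper than the hundreds or thousands of small Gaussian denoising steps used by original diffusion models.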

Zhisheng Xiao, Karsten Kreis, Arash Vahdat• 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Generation | CIFAR-10 (test) | FID 3.75 | 471 |
| Unconditional Image Generation | CIFAR-10 | FID 4.15 | 171 |
| Image Generation | CIFAR10 32x32 (test) | FID 3.75 | 154 |
| Unconditional Generation | CIFAR-10 (test) | FID 3.75 | 102 |
| Image Generation | CIFAR-10 | FID 14.6 | 95 |
| Image Generation | CIFAR-10 (train/test) | FID 3.75 | 78 |
| Image Generation | STL-10 (test) | -- | 59 |
| Image Generation | LSUN Church 256x256 (test) | FID 5.23 | 55 |
| Image Generation | CelebA-HQ 256x256 | FID 7.64 | 51 |
| Image Generation | Stacked MNIST | Modes 1.00e+3 | 32 |

Showing 10 of 17 rows

Other info

Code
