
Diffusion Models Without Attention

About

In recent advances in high-fidelity image generation, Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a key player. However, their application at high resolutions presents significant computational challenges. Current methods, such as patchifying, expedite processes in UNet and Transformer architectures, but at the expense of representational capacity. Addressing this, we introduce the Diffusion State Space Model (DiffuSSM), an architecture that supplants attention mechanisms with a more scalable state space model backbone. This approach effectively handles higher resolutions without resorting to global compression, thus preserving detailed image representation throughout the diffusion process. Our focus on FLOP-efficient architectures in diffusion training marks a significant step forward. Comprehensive evaluations on both ImageNet and LSUN datasets at two resolutions demonstrate that DiffuSSMs match or even outperform existing diffusion models with attention modules on FID and Inception Score while significantly reducing total FLOP usage.
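The key architectural idea is that token mixing via a linear state-space recurrence costs time linear in sequence length, whereas self-attention is quadratic. A toy sketch of such a scan is below; the sizes, the function name `ssm_scan`, and the dense A/B/C matrices are illustrative assumptions, not the paper's actual DiffuSSM block (which uses a gated bidirectional SSM):

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Linear state-space recurrence over a token sequence.

    h_t = A @ h_{t-1} + B @ u_t ;  y_t = C @ h_t

    Each output token is produced in O(1) state updates, so mixing the
    whole sequence is O(L) rather than the O(L^2) of self-attention.
    """
    L, _ = u.shape
    d_state = A.shape[0]
    h = np.zeros(d_state)
    ys = np.empty((L, C.shape[0]))
    for t in range(L):
        h = A @ h + B @ u[t]   # carry compressed context forward
        ys[t] = C @ h          # read out a feature for token t
    return ys

rng = np.random.default_rng(0)
L, d, d_state = 16, 8, 4               # 16 "image tokens" of width 8 (toy sizes)
u = rng.standard_normal((L, d))
A = 0.9 * np.eye(d_state)              # stable state transition
B = 0.1 * rng.standard_normal((d_state, d))
C = 0.1 * rng.standard_normal((d, d_state))
y = ssm_scan(u, A, B, C)
print(y.shape)  # (16, 8): same sequence shape as the input
```

Because the recurrence never materializes an L×L interaction matrix, the full token sequence can be processed at high resolution without patchifying it down first, which is the property the abstract refers to as avoiding global compression.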

Jing Nathan Yan, Jiatao Gu, Alexander M. Rush · 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Class-conditional Image Generation | ImageNet 256x256 (train) | IS | 259.1 | 305
Class-conditional Image Generation | ImageNet 256x256 (train val) | FID | 2.28 | 178
Class-conditional Image Generation | ImageNet 512x512 | FID | 3.41 | 72
Image Generation | ImageNet 512x512 (test) | FID | 3.41 | 57
Class-conditional Image Generation | ImageNet 256x256 2012 (val) | FID | 2.28 | 38
Conditional Image Generation | ImageNet 512x512 (val) | gFID | 3.41 | 30
Class-conditional Image Generation | ImageNet 256x256 1k (train val) | FID | 2.28 | 17
Class-conditional Image Generation | ImageNet 512x512 (train val) | FID | 3.41 | 16
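The FID numbers above are Fréchet distances between Gaussian fits to Inception features of real and generated images; lower is better. A minimal sketch of the distance itself is below, simplified by assuming diagonal covariances (the full metric uses a matrix square root of the covariance product, and real evaluations extract features with an Inception network):

```python
import numpy as np

def fid_diagonal(feats_real, feats_fake):
    """Fréchet distance between two feature sets under a diagonal-
    covariance assumption: ||mu1 - mu2||^2 + sum(v1 + v2 - 2*sqrt(v1*v2))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    v1, v2 = feats_real.var(axis=0), feats_fake.var(axis=0)
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum(v1 + v2 - 2.0 * np.sqrt(v1 * v2)))

rng = np.random.default_rng(0)
a = rng.standard_normal((5000, 16))   # stand-in for real-image features
b = a.copy()                          # identical "generated" features
print(fid_diagonal(a, b))             # near 0 for identical feature sets
```

Identical feature distributions give a distance near zero, and the score grows as the generated-feature statistics drift from the real ones, which is why the table ranks lower FID as better.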
