
DiffiT: Diffusion Vision Transformers for Image Generation

About

Diffusion models, with their powerful expressivity and high sample quality, have achieved state-of-the-art (SOTA) performance in the generative domain. The pioneering Vision Transformer (ViT) has also demonstrated strong modeling capabilities and scalability, especially for recognition tasks. In this paper, we study the effectiveness of ViTs in diffusion-based generative learning and propose a new model denoted as Diffusion Vision Transformers (DiffiT). Specifically, we propose a methodology for fine-grained control of the denoising process and introduce the Time-dependent Multi-head Self-Attention (TMSA) mechanism. DiffiT is surprisingly effective in generating high-fidelity images with significantly better parameter efficiency. We also propose latent- and image-space DiffiT models and show SOTA performance on a variety of class-conditional and unconditional synthesis tasks at different resolutions. The latent DiffiT model achieves a new SOTA FID score of 1.73 on the ImageNet-256 dataset while having 19.85% and 16.88% fewer parameters than other Transformer-based diffusion models such as MDT and DiT, respectively. Code: https://github.com/NVlabs/DiffiT
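The abstract's key architectural idea is that the attention projections themselves depend on the diffusion time step, rather than injecting time information only through normalization layers. A minimal NumPy sketch of that idea is below; the weight names (Wqs, Wqt, etc.), head count, and shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def tmsa(x, t, Wqs, Wqt, Wks, Wkt, Wvs, Wvt, num_heads=4):
    """Sketch of time-dependent multi-head self-attention (TMSA).

    x: (N, d) spatial tokens; t: (d,) time-step embedding.
    Each of q, k, v is the sum of a spatial projection of x and a
    temporal projection of t, so attention weights change with the
    denoising step. Weight matrices are all (d, d) here for simplicity.
    """
    N, d = x.shape
    hd = d // num_heads
    q = x @ Wqs + t @ Wqt          # time-conditioned queries, (N, d)
    k = x @ Wks + t @ Wkt          # time-conditioned keys
    v = x @ Wvs + t @ Wvt          # time-conditioned values
    # split into heads: (num_heads, N, head_dim)
    split = lambda z: z.reshape(N, num_heads, hd).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(hd))
    out = attn @ v                  # (num_heads, N, head_dim)
    return out.transpose(1, 0, 2).reshape(N, d)
```

Because the time embedding shifts q, k, and v jointly, the same set of weights can realize different attention patterns at early (noisy) and late (fine-detail) denoising steps.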

Ali Hatamizadeh, Jiaming Song, Guilin Liu, Jan Kautz, Arash Vahdat • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Class-conditional Image Generation | ImageNet 256x256 | Inception Score (IS) | 276.5 | 441
Class-conditional Image Generation | ImageNet 256x256 (train) | IS | 276.5 | 305
Image Generation | ImageNet 256x256 | FID | 1.73 | 243
Image Generation | ImageNet 512x512 (val) | FID-50K | 2.67 | 184
Class-conditional Image Generation | ImageNet 256x256 (train val) | FID | 1.73 | 178
Image Generation | CIFAR10 32x32 (test) | FID | 1.95 | 154
Image Generation | ImageNet 256x256 (train) | FID | 1.73 | 91
Conditional Image Generation | ImageNet-1K 256x256 (val) | gFID | 1.73 | 86
Class-conditional Image Generation | ImageNet 512x512 | FID | 2.67 | 72
Image Generation | FFHQ 64x64 (test) | FID | 2.22 | 69
Showing 10 of 17 rows
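Most rows above report FID (Fréchet Inception Distance), which compares Gaussians fitted to Inception features of real and generated images: FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 (Sigma_r Sigma_g)^(1/2)). A sketch of the standard formula (not code from the DiffiT repository) is:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Fréchet Inception Distance between two Gaussians
    (mu1, sigma1) and (mu2, sigma2) fitted to feature statistics.
    Lower is better; identical distributions give 0."""
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary parts from numerical error
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean))
```

In practice the means and covariances are estimated from Inception-v3 activations over large sample sets (e.g. FID-50K uses 50,000 generated images).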

Other info

Code: https://github.com/NVlabs/DiffiT