
ViTGAN: Training GANs with Vision Transformers

About

Recently, Vision Transformers (ViTs) have shown competitive performance on image recognition while requiring less vision-specific inductive biases. In this paper, we investigate if such performance can be extended to image generation. To this end, we integrate the ViT architecture into generative adversarial networks (GANs). For ViT discriminators, we observe that existing regularization methods for GANs interact poorly with self-attention, causing serious instability during training. To resolve this issue, we introduce several novel regularization techniques for training GANs with ViTs. For ViT generators, we examine architectural choices for latent and pixel mapping layers to facilitate convergence. Empirically, our approach, named ViTGAN, achieves comparable performance to the leading CNN-based GAN models on three datasets: CIFAR-10, CelebA, and LSUN bedroom.
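Among the discriminator regularization techniques the paper reports is replacing dot-product similarity in self-attention with (negative) squared L2 distance, following work on Lipschitz-bounded self-attention, which tames the instability mentioned above. A rough NumPy sketch of one L2-attention head; the tied query/key projection and all names here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def l2_attention(X, Wqk, Wv):
    """Single-head L2-distance self-attention (sketch).

    Similarity between tokens i and j is -||q_i - k_j||^2 instead of
    q_i . k_j, which keeps the attention map Lipschitz-bounded.
    Queries and keys share one projection Wqk (an assumption of this sketch).
    """
    Q = X @ Wqk                   # tied query/key projection
    K = Q
    d = Q.shape[-1]
    # pairwise squared Euclidean distances ||q_i - k_j||^2
    sq = ((Q[:, None, :] - K[None, :, :]) ** 2).sum(-1)
    A = softmax(-sq / np.sqrt(d))  # rows sum to 1, as in standard attention
    return A @ (X @ Wv)

# Example: 5 tokens with 8-dim embeddings
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Wqk = rng.standard_normal((8, 8))
Wv = rng.standard_normal((8, 8))
out = l2_attention(X, Wqk, Wv)    # shape (5, 8)
```

Because the similarity is a distance rather than an inner product, its gradient with respect to the projections cannot blow up the way unconstrained dot-product attention can, which is the Lipschitz property the discriminator relies on.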

Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, Ce Liu • 2021

Related benchmarks

Task                             Dataset                      Metric  Result  Rank
Image Generation                 CIFAR-10 (test)              FID     6.66    471
Unconditional Image Generation   CIFAR-10 (test)              FID     4.57    216
Unconditional Image Generation   CelebA unconditional 64x64   FID     3.74    95
Image Generation                 CIFAR-10                     FID     6.66    95
Image Generation                 CIFAR-10 (train/test)        FID     6.66    78
Image Generation                 CIFAR-10 32x32               FID     4.57    44
Image Generation                 CIFAR-10                     FID     6.66    25
Unconditional Image Generation   LSUN bedroom 64x64           FID     1.49    4
Unconditional Image Generation   LSUN bedroom 128x128         FID     1.87    3
Image Generation                 CelebA 64x64 19k samples     FID     3.74    3
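The FID scores above measure the Fréchet distance between two Gaussians fitted to Inception features of real and generated images (lower is better). Omitting the feature extraction, the distance itself can be sketched in NumPy; the function name is illustrative:

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2})."""
    diff = mu1 - mu2
    # Tr((cov1 cov2)^{1/2}) via the eigenvalues of cov1 @ cov2, which are
    # real and non-negative for PSD covariances (clip guards round-off)
    eigvals = np.linalg.eigvals(cov1 @ cov2)
    tr_covmean = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * tr_covmean)

# Identical distributions give distance 0; unit-covariance Gaussians whose
# means differ by (1, 1) give ||(1,1)||^2 = 2.
print(frechet_distance(np.zeros(2), np.eye(2), np.zeros(2), np.eye(2)))  # -> 0.0
print(frechet_distance(np.ones(2), np.eye(2), np.zeros(2), np.eye(2)))   # -> 2.0
```

In practice the moments are estimated from Inception-v3 activations over tens of thousands of samples, which is why several CIFAR-10 rows report the same FID under slightly different evaluation protocols.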
