
Consistency Regularization for Generative Adversarial Networks

About

Generative Adversarial Networks (GANs) are known to be difficult to train, despite considerable research effort. Several regularization techniques for stabilizing training have been proposed, but they introduce non-trivial computational overheads and interact poorly with existing techniques like spectral normalization. In this work, we propose a simple, effective training stabilizer based on the notion of consistency regularization, a popular technique in the semi-supervised learning literature. In particular, we augment data passing into the GAN discriminator and penalize the sensitivity of the discriminator to these augmentations. We conduct a series of experiments to demonstrate that consistency regularization works effectively with spectral normalization and various GAN architectures, loss functions, and optimizer settings. Our method achieves the best FID scores for unconditional image generation compared to other regularization methods on CIFAR-10 and CelebA. Moreover, our consistency regularized GAN (CR-GAN) improves state-of-the-art FID scores for conditional generation from 14.73 to 11.48 on CIFAR-10 and from 8.73 to 6.66 on ImageNet-2012.
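The core idea above, penalizing the discriminator's sensitivity to augmentations of its input, can be sketched as a squared difference between the discriminator's outputs on an image and its augmented copy. The sketch below is a minimal, framework-free illustration; the toy linear discriminator, the horizontal-flip augmentation, and the names `consistency_loss` and `lambda_cr` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def consistency_loss(discriminator, x, augment):
    """Consistency regularization sketch: penalize the discriminator's
    sensitivity to an augmentation T, i.e. mean || D(x) - D(T(x)) ||^2."""
    d_real = discriminator(x)           # outputs on the original batch
    d_aug = discriminator(augment(x))   # outputs on the augmented batch
    return float(np.mean((d_real - d_aug) ** 2))

# Toy stand-ins (hypothetical): a linear "discriminator" on flattened
# 8-pixel "images", and a horizontal flip as the augmentation T.
rng = np.random.default_rng(0)
w = rng.normal(size=(8,))

def disc(x):
    return x @ w  # one logit per batch element

def hflip(x):
    return x[:, ::-1]  # flip along the pixel axis

x = rng.normal(size=(4, 8))  # a batch of 4 fake "images"
cr_value = consistency_loss(disc, x, hflip)

# In training, this term would simply be added to the usual
# discriminator loss with a weight, e.g.:
lambda_cr = 10.0
# d_loss_total = d_loss_adversarial + lambda_cr * cr_value
```

An augmentation that the discriminator is already invariant to contributes zero penalty, which is why the term only pushes the discriminator toward invariance rather than changing its adversarial objective.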

Han Zhang, Zizhao Zhang, Augustus Odena, Honglak Lee • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Generation | CIFAR-10 (test) | FID | 11.48 | 471 |
| Image Generation | CIFAR-10 | Inception Score | 9.17 | 178 |
| Class-conditional Image Generation | ImageNet | FID | 6.66 | 132 |
| Conditional Image Generation | CIFAR10 (test) | Fréchet Inception Distance | 11.48 | 66 |
| Image Generation | CIFAR-10 unconditional (test) | FID | 18.72 | 39 |
| Conditional Image Generation | CIFAR-10 class-conditional | FID | 11.48 | 29 |
| Image Generation | CIFAR-100 (train) | FID | 11.26 | 20 |
| Image Generation | Oxford-Dog (train) | FID | 48.73 | 10 |
| Image Generation | FFHQ 2.5k (train) | FID | 41.43 | 10 |
| Image Generation | MetFaces (train) | FID | 48.89 | 10 |

Showing 10 of 21 rows.
