
DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data

About

Generative adversarial nets (GANs) have been remarkably successful at learning to sample from distributions specified by a given dataset, particularly if the given dataset is reasonably large compared to its dimensionality. However, given limited data, classical GANs have struggled, and strategies like output regularization, data augmentation, use of pre-trained models, and pruning have been shown to lead to improvements. Notably, these strategies are 1) often constrained to particular settings, e.g., the availability of a pre-trained GAN, or 2) increase training time, e.g., when pruning is used. In contrast, we propose a Discriminator gradIent Gap regularized GAN (DigGAN) formulation which can be added to any existing GAN. DigGAN augments existing GANs by encouraging the norm of the gradient of the discriminator's prediction w.r.t. real images and w.r.t. generated samples to be close. We observe this formulation to avoid bad attractors within the GAN loss landscape, and we find DigGAN to significantly improve the results of GAN training when limited data is available. Code is available at https://github.com/AilsaF/DigGAN.
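The regularizer described above penalizes the gap between the discriminator's input-gradient norm on real images and on generated samples. A minimal sketch of one way such a penalty could be computed in PyTorch is below; the function name, the squared-gap form, and the batch-mean reduction are illustrative assumptions, not the authors' exact code.

```python
import torch

def dig_penalty(discriminator, real, fake):
    """Sketch of a discriminator gradient-gap penalty (illustrative, not the official DigGAN code)."""
    # Track gradients w.r.t. the images themselves, not the network weights.
    real = real.detach().requires_grad_(True)
    fake = fake.detach().requires_grad_(True)

    d_real = discriminator(real)
    d_fake = discriminator(fake)

    # Per-sample gradients of the discriminator's prediction w.r.t. its input.
    grad_real = torch.autograd.grad(d_real.sum(), real, create_graph=True)[0]
    grad_fake = torch.autograd.grad(d_fake.sum(), fake, create_graph=True)[0]

    # L2 norm of each per-sample gradient.
    norm_real = grad_real.flatten(1).norm(2, dim=1)
    norm_fake = grad_fake.flatten(1).norm(2, dim=1)

    # Penalize the gap between the two mean gradient norms.
    return (norm_real.mean() - norm_fake.mean()) ** 2
```

Because `create_graph=True` is passed, the returned scalar can be added (with a weight) to the discriminator loss and backpropagated like any other regularization term.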

Tiantian Fang, Ruoyu Sun, Alex Schwing • 2022

Related benchmarks

Task                       Dataset                Result                 Rank
Image Generation           CIFAR-100              FID 11.63              51
Image Generation           CIFAR-100 (10% data)   Inception Score 9.06   41
Image Generation           CIFAR-100 (20% data)   Inception Score 9.54   41
Image Generation           CIFAR-100 (full data)  Inception Score 11.45  35
Image Generation           CIFAR-10 (10% data)    Inception Score 8.32   35
Image Generation           CIFAR-10 (20% data)    Inception Score 8.89   35
Image Generation           CIFAR-10 (100% data)   Inception Score 9.28   30
Few-shot Image Generation  Grumpy Cat 100-shot    FID 26.75              26
Few-shot Image Generation  Obama 100-shot         FID 41.34              26
Image Generation           AnimalFace Dog         FID 59                 21

Showing 10 of 21 rows.

Other info

Code: https://github.com/AilsaF/DigGAN
