
Regularizing Generative Adversarial Networks under Limited Data

About

Recent years have witnessed rapid progress in generative adversarial networks (GANs). However, the success of GAN models hinges on a large amount of training data. This work proposes a regularization approach for training robust GAN models on limited data. We theoretically show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data. Extensive experiments on several benchmark datasets demonstrate that the proposed regularization scheme 1) improves the generalization performance and stabilizes the learning dynamics of GAN models under limited training data, and 2) complements the recent data augmentation methods. These properties facilitate training GAN models to achieve state-of-the-art performance when only limited training data from the ImageNet benchmark is available.
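The regularizer at the heart of this approach is lightweight: alongside the usual adversarial loss, the discriminator is penalized when its predictions drift too far from moving anchors tracked over training. Below is a minimal PyTorch sketch of one plausible form of such an anchor-based regularizer, consistent with the paper's public description; the class `EMA`, the function name `lecam_reg`, the decay value, and the loss weight `lambda_lc` are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of an EMA-anchored discriminator regularizer in the spirit of the
# LeCam regularization described above. Names and hyperparameters here are
# illustrative assumptions.
import torch


class EMA:
    """Tracks an exponential moving average of a scalar statistic."""

    def __init__(self, decay: float = 0.99):
        self.decay = decay
        self.value = 0.0

    def update(self, x: torch.Tensor) -> float:
        # .item() detaches the statistic, so the anchor is a constant w.r.t. D.
        self.value = self.decay * self.value + (1.0 - self.decay) * x.mean().item()
        return self.value


ema_real, ema_fake = EMA(), EMA()  # anchors for D(x) and D(G(z)), respectively


def lecam_reg(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Penalize discriminator outputs that stray past the opposite anchor."""
    anchor_real = ema_real.update(d_real)  # running mean of D on real images
    anchor_fake = ema_fake.update(d_fake)  # running mean of D on generated images
    return (torch.relu(d_real - anchor_fake) ** 2).mean() + \
           (torch.relu(anchor_real - d_fake) ** 2).mean()
```

In the discriminator update, the penalty would be added with a tunable weight, e.g. `d_loss = adv_loss + lambda_lc * lecam_reg(d_real, d_fake)`. Keeping the anchors as detached running means is what lets the penalty damp the discriminator's swings without back-propagating through past iterates.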

Hung-Yu Tseng, Lu Jiang, Ce Liu, Ming-Hsuan Yang, Weilong Yang • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Generation | CIFAR-10 | Inception Score | 9.31 | 178 |
| Image Generation | CIFAR-100 | FID | 11.84 | 51 |
| Image Generation | CIFAR-100 (10% data) | Inception Score | 9.17 | 41 |
| Image Generation | CIFAR-100 (20% data) | Inception Score | 10.12 | 41 |
| Image Generation | CIFAR-10 (10% data) | Inception Score | 8.81 | 35 |
| Image Generation | CIFAR-10 (20% data) | Inception Score | 9.01 | 35 |
| Image Generation | CIFAR-100 (full data) | Inception Score | 11.41 | 35 |
| Image Generation | CIFAR-10 (100% data) | Inception Score | 9.45 | 30 |
| Image Generation | Obama 100-shot (train) | FID | 33.16 | 28 |
| Image Generation | Grumpy Cat 100-shot (train) | FID | 24.93 | 28 |
(10 of 41 benchmark entries shown.)

Other info

Code
