KD-DLGAN: Data Limited Image Generation via Knowledge Distillation

About

Generative Adversarial Networks (GANs) rely heavily on large-scale training data to learn high-quality image generation models. With limited training data, the GAN discriminator often suffers from severe overfitting, which directly degrades generation quality and, in particular, generation diversity. Inspired by recent advances in knowledge distillation (KD), we propose KD-DLGAN, a knowledge-distillation-based generation framework that introduces pre-trained vision-language models for training effective data-limited generation models. KD-DLGAN consists of two innovative designs. The first is aggregated generative KD, which mitigates discriminator overfitting by challenging the discriminator with harder learning tasks and distilling more generalizable knowledge from the pre-trained models. The second is correlated generative KD, which improves generation diversity by distilling and preserving the diverse image-text correlations within the pre-trained models. Extensive experiments over multiple benchmarks show that KD-DLGAN achieves superior image generation with limited training data. In addition, KD-DLGAN complements the state-of-the-art with consistent and substantial performance gains.
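To make the two distillation objectives more concrete, below is a minimal PyTorch sketch of how they could be instantiated, based only on the description above. The names (FrozenVLEncoder, aggregated_kd_loss, correlated_kd_loss), the feature shapes, and the specific loss forms are all illustrative assumptions, not the paper's exact formulation; in practice the frozen encoder would be a real pre-trained vision-language model such as CLIP.

```python
# Minimal sketch of the two KD-DLGAN loss ideas, based only on the abstract.
# All module names, shapes, and loss forms are illustrative assumptions,
# not the authors' exact method.

import torch
import torch.nn as nn
import torch.nn.functional as F


class FrozenVLEncoder(nn.Module):
    """Stand-in for a frozen pre-trained vision-language image encoder
    (e.g. CLIP's); here just a frozen random projection so the sketch
    is self-contained and runnable."""

    def __init__(self, in_dim=3 * 32 * 32, out_dim=128):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        for p in self.parameters():
            p.requires_grad_(False)  # teacher stays fixed during GAN training

    def forward(self, images):
        return self.proj(images.flatten(1))


def aggregated_kd_loss(d_feats, vl_feats):
    """Hypothetical 'aggregated generative KD': pull the discriminator's
    features toward the frozen vision-language features, giving the
    discriminator a harder, more generalizable target than real/fake alone."""
    return F.mse_loss(F.normalize(d_feats, dim=-1),
                      F.normalize(vl_feats, dim=-1))


def correlated_kd_loss(d_feats, vl_feats):
    """Hypothetical 'correlated generative KD': match pairwise sample
    similarity structure, so the diverse correlations encoded by the
    pre-trained model are preserved and mode collapse is discouraged."""
    d_sim = F.normalize(d_feats, dim=-1) @ F.normalize(d_feats, dim=-1).t()
    t_sim = F.normalize(vl_feats, dim=-1) @ F.normalize(vl_feats, dim=-1).t()
    return F.mse_loss(d_sim, t_sim)


if __name__ == "__main__":
    images = torch.randn(8, 3, 32, 32)        # dummy mini-batch
    d_head = nn.Linear(3 * 32 * 32, 128)      # stand-in discriminator features
    teacher = FrozenVLEncoder()

    d_feats = d_head(images.flatten(1))
    vl_feats = teacher(images)
    loss = aggregated_kd_loss(d_feats, vl_feats) + correlated_kd_loss(d_feats, vl_feats)
    print(float(loss))
```

The point the sketch tries to capture is that both losses act on the discriminator: the aggregated term gives it a harder, more general objective than binary real/fake classification, while the correlated term matches pairwise similarity structure so that diverse relationships learned by the teacher carry over to the student.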

Kaiwen Cui, Yingchen Yu, Fangneng Zhan, Shengcai Liao, Shijian Lu, Eric Xing · 2023

Related benchmarks

Task                        Dataset                  Result       Rank
Image Generation            CIFAR-100 (20% data)     --           41
Image Generation            CIFAR-100 (10% data)     --           41
Image Generation            CIFAR-10 (20% data)      --           35
Image Generation            CIFAR-10 (10% data)      --           35
Image Generation            CIFAR-100 (full data)    --           35
Image Generation            CIFAR-10 (100% data)     --           30
Few-shot Image Generation   Grumpy Cat (100-shot)    FID 19.65    26
Few-shot Image Generation   Obama (100-shot)         FID 29.38    26
Image Generation            AnimalFace Dog           FID 50.22    21
Image Generation            ImageNet (25% data)      IS 14.65     16

Showing 10 of 18 rows.
