Diverse Image Generation via Self-Conditioned GANs

About

We introduce a simple but effective unsupervised method for generating realistic and diverse images. We train a class-conditional GAN model without using manually annotated class labels. Instead, our model is conditional on labels automatically derived from clustering in the discriminator's feature space. Our clustering step automatically discovers diverse modes, and explicitly requires the generator to cover them. Experiments on standard mode collapse benchmarks show that our method outperforms several competing methods when addressing mode collapse. Our method also performs well on large-scale datasets such as ImageNet and Places365, improving both image diversity and standard quality metrics, compared to previous methods.
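The clustering step described above can be sketched in a few lines: embed the training images with the discriminator, cluster the embeddings, and use the cluster assignments as pseudo-class labels for the conditional GAN. The sketch below is a toy, self-contained illustration of that labeling idea, assuming a simple k-means with farthest-point initialization and random arrays standing in for discriminator features; the actual method's feature extractor, cluster count, and re-clustering schedule are not shown here.

```python
import numpy as np

def cluster_pseudo_labels(features, k, n_iters=10, seed=0):
    """Assign each sample a pseudo-class label by k-means clustering.

    `features` is an (N, D) array; in the paper these would be taken
    from an intermediate discriminator layer (here they are synthetic).
    """
    rng = np.random.default_rng(seed)
    # Farthest-point initialization: spread the k centers out.
    centers = [features[rng.integers(len(features))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iters):
        # Assign each point to its nearest cluster center.
        dists = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned points.
        for c in range(k):
            if (labels == c).any():
                centers[c] = features[labels == c].mean(axis=0)
    return labels

# Toy "features": two well-separated blobs standing in for activations.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (50, 8)), rng.normal(5, 0.1, (50, 8))])
labels = cluster_pseudo_labels(feats, k=2)
# Points in the same blob get the same pseudo-label; these labels would
# then condition both generator and discriminator in place of annotations.
```

Because the generator must produce samples for every cluster label, each discovered mode is explicitly covered, which is the mechanism behind the mode-collapse results reported below.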

Steven Liu, Tongzhou Wang, David Bau, Jun-Yan Zhu, Antonio Torralba • 2020

Related benchmarks

Task | Dataset | Result | Rank
Image Generation | ImageNet (val) | FID 41.7 | 198
Image Generation | CIFAR-10 | Inception Score 7.72 | 178
Image Generation | Stacked MNIST | Modes 1.00e+3 | 32
Image Generation | ImageNet 1k (train) | FID 40.3 | 29
Image Generation | ImageNet ILSVRC 128x128 2012 (test) | FID 40.3 | 18
Image Generation | ImageNet (train val) | Precision 66.3 | 17
Unconditional Image Generation | ImageNet 128x128 (train) | FID 40.3 | 9
Controllable Image Generation | CelebA | Gender 95 | 5
