# NoisyTwins: Class-Consistent and Diverse Image Generation through StyleGANs

## About
StyleGANs are at the forefront of controllable image generation, as they produce a semantically disentangled latent space that is well suited for image editing and manipulation. However, the performance of StyleGANs degrades severely when they are trained with class-conditioning on large-scale long-tailed datasets. We find that one cause of this degradation is the collapse of the per-class latents in the $\mathcal{W}$ latent space. With NoisyTwins, we first introduce an effective and inexpensive augmentation strategy for class embeddings, and then decorrelate the latents via self-supervision in the $\mathcal{W}$ space. This decorrelation mitigates collapse, so our method preserves intra-class diversity while maintaining class consistency in generated images. We demonstrate the effectiveness of our approach on the large-scale real-world long-tailed datasets ImageNet-LT and iNaturalist 2019, where it outperforms prior methods by $\sim 19\%$ on FID, establishing a new state-of-the-art.
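The idea above can be sketched in code: a class embedding is perturbed twice with small Gaussian noise, both copies are passed through the mapping network, and a Barlow-Twins-style redundancy-reduction loss aligns the two $\mathcal{W}$ latents while decorrelating their dimensions. This is a minimal illustrative sketch, not the paper's implementation; the mapping network, noise scale `sigma`, and weight `lambd` are stand-in assumptions.

```python
import torch

def noisy_twins_loss(w1, w2, lambd=0.005, eps=1e-6):
    """Barlow-Twins-style loss on two views of W latents (illustrative)."""
    # Normalize each latent dimension across the batch.
    w1 = (w1 - w1.mean(0)) / (w1.std(0) + eps)
    w2 = (w2 - w2.mean(0)) / (w2.std(0) + eps)
    n, d = w1.shape
    c = (w1.T @ w2) / n  # (d, d) cross-correlation between the two views
    # Invariance term: diagonal pushed toward 1 (views agree).
    on_diag = ((torch.diagonal(c) - 1) ** 2).sum()
    # Redundancy-reduction term: off-diagonal pushed toward 0 (dims decorrelated).
    off_diag = (c ** 2).sum() - (torch.diagonal(c) ** 2).sum()
    return on_diag + lambd * off_diag

# Hypothetical usage: a toy MLP stands in for the StyleGAN mapping network.
mapping = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 64)
)
emb = torch.randn(32, 128)   # class embeddings for a batch
sigma = 0.1                  # assumed noise scale for the augmentation
w_a = mapping(emb + sigma * torch.randn_like(emb))
w_b = mapping(emb + sigma * torch.randn_like(emb))
loss = noisy_twins_loss(w_a, w_b)
```

Minimizing this loss keeps the two noisy twins of a class embedding mapped to nearby latents (preventing per-class collapse to a single point would be handled by the noise augmentation itself) while decorrelating the latent dimensions.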
## Related benchmarks
| Task | Dataset | FID | Rank |
|---|---|---|---|
| Class-conditional Image Generation | ImageNet-LT (val) | 21.29 | 15 |
| Image Generation | AnimalFace | 16.15 | 11 |
| Image Generation | ImageNet Carnivore | 13.65 | 6 |
| Image Generation | CIFAR10-LT | 17.74 | 5 |
| Image Generation | iNaturalist 2019 (val) | 11.46 | 5 |
| Class-conditional Image Generation | iNaturalist 2019 (val) | 11.46 | 4 |