# StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators

## About
Can a generative model be trained to produce images from a specific domain, guided only by a text prompt, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models, we present a text-driven method that shifts a generative model to new domains without requiring the collection of even a single image. We show that, through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or outright impossible to achieve with existing methods. We conduct an extensive set of experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.
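The training signal behind this text-driven shift can be sketched as a directional CLIP loss: the change between the frozen generator's output and the adapted generator's output (in CLIP image-embedding space) is aligned with the change between the source and target text prompts (in CLIP text-embedding space). The sketch below is a minimal, hedged illustration of that loss; the random vectors stand in for real CLIP embeddings, and `clip_directional_loss` is a hypothetical name, not an identifier from the released code.

```python
import torch
import torch.nn.functional as F

def clip_directional_loss(img_src_emb, img_tgt_emb, txt_src_emb, txt_tgt_emb):
    """Directional CLIP loss: encourage the image-space edit direction
    (adapted generator output minus frozen generator output) to point
    the same way as the text-space direction (target prompt minus
    source prompt). Returns a scalar in [0, 2]."""
    delta_img = img_tgt_emb - img_src_emb
    delta_txt = txt_tgt_emb - txt_src_emb
    # 1 - cosine similarity: 0 when directions align, 2 when opposite.
    return (1.0 - F.cosine_similarity(delta_img, delta_txt, dim=-1)).mean()

# Stand-in embeddings: in practice these would come from CLIP's image
# and text encoders applied to generator outputs and the two prompts.
torch.manual_seed(0)
e_img_src = torch.randn(4, 512)  # frozen-generator images, embedded
e_img_tgt = torch.randn(4, 512)  # adapted-generator images, embedded
e_txt_src = torch.randn(1, 512)  # e.g. embedding of "photo"
e_txt_tgt = torch.randn(1, 512)  # e.g. embedding of "sketch"

loss = clip_directional_loss(e_img_src, e_img_tgt, e_txt_src, e_txt_tgt)
```

During training only the adapted generator's weights would be updated by backpropagating this loss; the frozen generator and the CLIP encoders stay fixed.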
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Head Stylization | FFHQ (test) | FID | 121 | 9 |
| 3D Head Stylization | RenderMe360 (test) | FID | 175.8 | 9 |
| Tanned Facial Attribute Editing | CelebA-HQ (test) | Sdir | 0.166 | 8 |
| Sad Facial Attribute Editing | CelebA-HQ (test) | Sdir | 0.161 | 8 |
| Smiling Facial Attribute Editing | CelebA-HQ (test) | Sdir | 0.16 | 8 |
| Text-driven Style Transfer | Custom Stylized Images, 10 text conditions (test) | CLIP Score | 0.2252 | 7 |
| Text-Guided Image Manipulation | Human Face images with 10 text conditions (test) | Style Score | 2.66 | 7 |
| Real image conditioned semantic editing | CelebA-HQ-256 (test) | Sdir | 0.16 | 5 |
| Domain Adaptation | FFHQ One-shot Domain Adaptation 1.0 (val) | KID (Amedeo Modigliani) | 131 | 4 |
| Domain Adaptation | FFHQ to Amedeo Modigliani | FID | 188.4 | 4 |