
StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators

About

Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large-scale Contrastive-Language-Image-Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or outright impossible to reach with existing methods. We conduct an extensive set of experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.
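At the heart of the method is a CLIP-space directional objective: the shift between the source and adapted generator's images (in CLIP embedding space) is aligned with the shift between the source and target text prompts. The sketch below illustrates that directional loss with plain NumPy vectors standing in for CLIP embeddings; the function name and toy inputs are illustrative assumptions, not the authors' code.

```python
import numpy as np


def directional_clip_loss(src_img_emb, tgt_img_emb, src_txt_emb, tgt_txt_emb):
    """Directional loss in CLIP space (a sketch of the StyleGAN-NADA idea):
    penalize misalignment between the image-embedding shift and the
    text-embedding shift, via 1 - cosine similarity of the two directions."""

    def normalize(v):
        return v / np.linalg.norm(v)

    delta_img = normalize(np.asarray(tgt_img_emb, dtype=float) -
                          np.asarray(src_img_emb, dtype=float))
    delta_txt = normalize(np.asarray(tgt_txt_emb, dtype=float) -
                          np.asarray(src_txt_emb, dtype=float))
    # 0 when the image shift points exactly along the text shift, 2 when opposite.
    return 1.0 - float(np.dot(delta_img, delta_txt))


# Toy embeddings: the image shift here points in the same direction as the
# text shift ("photo" -> "sketch", say), so the loss is zero.
loss = directional_clip_loss(
    src_img_emb=[0.0, 0.0], tgt_img_emb=[1.0, 0.0],
    src_txt_emb=[0.0, 1.0], tgt_txt_emb=[1.0, 1.0],
)
```

In the actual training loop, the CLIP encoders are frozen and only the generator's weights are updated to minimize this loss, which is what lets the domain shift be driven by text alone.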

Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Head Stylization | FFHQ (test) | FID | 121 | 9 |
| 3D Head Stylization | RenderMe360 (test) | FID | 175.8 | 9 |
| Tanned Facial Attribute Editing | CelebA-HQ (test) | Sdir | 0.166 | 8 |
| Sad Facial Attribute Editing | CelebA-HQ (test) | Sdir | 0.161 | 8 |
| Smiling Facial Attribute Editing | CelebA-HQ (test) | Sdir | 0.16 | 8 |
| Text-driven Style Transfer | Custom Stylized Images, 10 text conditions (test) | CLIP Score | 0.2252 | 7 |
| Text-Guided Image Manipulation | Human Face images with 10 text conditions (test) | Style Score | 2.66 | 7 |
| Real image conditioned semantic editing | CelebA-HQ-256 (test) | Sdir | 0.16 | 5 |
| Domain Adaptation | FFHQ One-shot Domain Adaptation 1.0 (val) | KID (Amedeo Modigliani) | 131 | 4 |
| Domain Adaptation | FFHQ to Amedeo Modigliani | FID | 188.4 | 4 |
Showing 10 of 19 rows.
