
GANSpace: Discovering Interpretable GAN Controls

About

This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis, such as change of viewpoint, aging, lighting, and time of day. We identify important latent directions based on Principal Components Analysis (PCA) applied either in latent space or feature space. Then, we show that a large number of interpretable controls can be defined by layer-wise perturbation along the principal directions. Moreover, we show that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner. We show results on different GANs trained on various datasets, and demonstrate good qualitative matches to edit directions found through earlier supervised approaches.
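The PCA step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: random vectors stand in for latent codes sampled from a real GAN (e.g. StyleGAN `w` vectors), and the edit strength `sigma` is an arbitrary choice.

```python
import numpy as np

def pca_directions(latents, k=10):
    """Compute the top-k principal directions of a set of latent vectors."""
    mean = latents.mean(axis=0)
    centered = latents - mean
    # SVD of the centered data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:k]

# Stand-in for sampled GAN latent codes (hypothetical; normally you would
# sample many z vectors and map them through the generator's mapping network)
rng = np.random.default_rng(0)
latents = rng.normal(size=(10_000, 512))

mean, directions = pca_directions(latents, k=10)

# Edit: move one latent code along the first principal direction.
# In the paper, such perturbations are applied per-layer to isolate effects.
z = rng.normal(size=512)
sigma = 3.0  # edit strength (illustrative value)
z_edit = z + sigma * directions[0]
```

Feeding `z_edit` (instead of `z`) to the generator would then produce the edited image; restricting the perturbation to a subset of layers gives the layer-wise controls the paper describes.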

Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, Sylvain Paris • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Disentangled Representation Learning | Cars3D | FactorVAE | 0.932 | 35 |
| Disentangled Representation Learning | MPI3D | FactorVAE Score | 0.465 | 18 |
| Disentanglement | MPI3D | D | 0.229 | 18 |
| Disentangled Representation Learning | Shapes3D | FactorVAE Score | 0.788 | 18 |
| Disentanglement | Shapes3D | D | 0.284 | 18 |
| Disentanglement | Cars3D | FVAE | 0.932 | 10 |
| Controllable Image Generation | CelebA | Gender | 98 | 5 |
| Facial Attribute Editing | FFHQ Attributes | Gender Accuracy | 84.1 | 3 |
| Image Editing | StyleGAN2 | FID | 7.91 | 3 |
