
Continuous Conditional Generative Adversarial Networks: Novel Empirical Losses and Label Input Mechanisms

About

This work proposes the continuous conditional generative adversarial network (CcGAN), the first generative model for image generation conditional on continuous, scalar conditions (termed regression labels). Existing conditional GANs (cGANs) are mainly designed for categorical conditions (e.g., class labels); conditioning on regression labels is mathematically distinct and raises two fundamental problems: (P1) since there may be very few (even zero) real images for some regression labels, minimizing existing empirical versions of cGAN losses (a.k.a. empirical cGAN losses) often fails in practice; (P2) since regression labels are scalar and infinitely many, conventional label input methods are not applicable. The proposed CcGAN solves these problems, respectively, by (S1) reformulating existing empirical cGAN losses to be appropriate for the continuous scenario, and (S2) proposing a naive label input (NLI) method and an improved label input (ILI) method to incorporate regression labels into the generator and the discriminator. The reformulation in (S1) leads to two novel empirical discriminator losses, termed the hard vicinal discriminator loss (HVDL) and the soft vicinal discriminator loss (SVDL), and a novel empirical generator loss. The error bounds of a discriminator trained with HVDL and SVDL are derived under mild assumptions in this work. Two new benchmark datasets (RC-49 and Cell-200) and a novel evaluation metric (Sliding Fréchet Inception Distance) are also proposed for this continuous scenario. Our experiments on the Circular 2-D Gaussians, RC-49, UTKFace, Cell-200, and Steering Angle datasets show that CcGAN is able to generate diverse, high-quality samples from the image distribution conditional on a given regression label. Moreover, in these experiments, CcGAN substantially outperforms cGAN both visually and quantitatively.
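The vicinal idea behind (S1) can be illustrated with a minimal sketch: a hard vicinity selects real samples whose labels fall within a small window around the target label, while a soft vicinity reweights all samples by a kernel in label distance. The function names, the vicinity half-width `kappa`, and the kernel scale `nu` below are illustrative choices, not the paper's exact formulations or hyperparameter values.

```python
import numpy as np

def hard_vicinal_batch(labels, target, kappa=0.025):
    """HVDL-style hard vicinity (sketch): indices of real samples whose
    regression labels satisfy |y_i - target| <= kappa. The discriminator
    can then be trained at `target` even if no image has exactly that
    label. `kappa` is an illustrative hyperparameter."""
    labels = np.asarray(labels, dtype=float)
    return np.flatnonzero(np.abs(labels - target) <= kappa)

def soft_vicinal_weights(labels, target, nu=50.0):
    """SVDL-style soft vicinity (sketch): every real sample contributes,
    downweighted by a Gaussian-like kernel in label distance. `nu` is an
    illustrative scale parameter."""
    labels = np.asarray(labels, dtype=float)
    return np.exp(-nu * (labels - target) ** 2)

# Toy example: 101 labels evenly spaced on [0, 1], target label 0.50.
labels = np.linspace(0.0, 1.0, 101)
idx = hard_vicinal_batch(labels, target=0.50)
weights = soft_vicinal_weights(labels, target=0.50)
print(labels[idx])        # only labels near 0.50 are selected
print(weights[50])        # weight peaks at the target label
```

The hard vicinity trades bias for variance via `kappa` (a wider window admits more, but less relevant, samples); the soft vicinity avoids the abrupt cutoff at the cost of touching every sample.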

Xin Ding, Yongwei Wang, Zuheng Xu, William J. Welch, Z. Jane Wang • 2020

Related benchmarks

Task                          Dataset                        Metric     Result  Rank
Conditional Image Generation  UTKFace 64x64 (test)           SFID       0.413   10
Conditional Image Generation  Steering Angle 64x64 (test)    SFID       1.334   10
Conditional Image Generation  UTKFace 128x128 (test)         SFID       0.367   10
Conditional Image Generation  RC-49 64x64 (test)             SFID       0.126   10
Conditional Image Generation  Steering Angle 128x128 (test)  SFID       1.689   10
Image Generation              RC-49                          Intra-FID  0.086   9
Image Generation              RC-49 (test)                   Intra-FID  0.389   6
Image Generation              UTKFace (test)                 Intra-FID  0.425   6
Image Generation              Cell-200 (test)                Intra-FID  7.266   6
Image Generation              Steering Angle (test)          Intra-FID  1.546   6

(Showing 10 of 16 rows.)
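The SFID scores above come from the paper's Sliding Fréchet Inception Distance, which evaluates conditional generation over a continuous label range by computing FID within a window that slides over the labels and averaging the per-window scores. The sketch below illustrates the idea on 1-D features (where the Fréchet distance between fitted Gaussians has a closed scalar form); the window half-width, the centers, and the 1-D simplification are assumptions for illustration, not the paper's exact protocol, which uses Inception features.

```python
import numpy as np

def fid_1d(a, b):
    # Fréchet distance between 1-D Gaussians fit to feature samples a, b:
    # (m1 - m2)^2 + v1 + v2 - 2*sqrt(v1*v2). A stand-in for full FID,
    # which uses mean vectors and covariance matrices of Inception features.
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var(), b.var()
    return (m1 - m2) ** 2 + v1 + v2 - 2.0 * np.sqrt(v1 * v2)

def sliding_fid(real_feats, fake_feats, real_labels, fake_labels,
                centers, half_width=0.1):
    # SFID idea (sketch): compute FID within a sliding label window
    # |y - c| <= half_width and average over window centers c.
    # Window size and centers are illustrative hyperparameters.
    scores = []
    for c in centers:
        r = real_feats[np.abs(real_labels - c) <= half_width]
        f = fake_feats[np.abs(fake_labels - c) <= half_width]
        if len(r) > 1 and len(f) > 1:
            scores.append(fid_1d(r, f))
    return float(np.mean(scores))

# Sanity check: comparing a sample set against itself gives SFID ~ 0.
rng = np.random.default_rng(0)
feats = rng.normal(size=500)
labs = rng.uniform(size=500)
centers = np.linspace(0.1, 0.9, 9)
print(sliding_fid(feats, feats, labs, labs, centers))
```

Averaging over windows penalizes a generator that is accurate only at well-populated labels, which a single pooled FID over all labels would not detect.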
