
Improving GANs with A Dynamic Discriminator

About

The discriminator plays a vital role in training generative adversarial networks (GANs) by distinguishing real from synthesized samples. While the real data distribution remains fixed, the synthesis distribution keeps varying because of the evolving generator, which in turn changes the bi-classification task faced by the discriminator. We argue that a discriminator whose capacity is adjusted on the fly can better accommodate such a time-varying task. A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves synthesis performance without incurring any additional computation cost or training objectives. Two capacity-adjusting schemes are developed for training GANs under different data regimes: i) given a sufficient amount of training data, the discriminator benefits from a progressively increased learning capacity, and ii) when the training data is limited, gradually decreasing the layer width mitigates the discriminator's over-fitting. Experiments on both 2D and 3D-aware image synthesis tasks across a range of datasets substantiate the generalizability of DynamicD as well as its substantial improvement over the baselines. Furthermore, DynamicD is complementary to other discriminator-improving approaches (including data augmentation, regularizers, and pre-training) and brings further performance gains when combined with them for learning GANs.
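The two regimes above boil down to a schedule that widens or narrows the discriminator as training progresses. The following is a minimal, hypothetical sketch of such a schedule (the function names and the linear interpolation are illustrative assumptions, not the authors' implementation): a fraction of the layer width is kept active, growing over time when data is sufficient and shrinking when data is limited.

```python
def dynamic_width_fraction(step, total_steps, start_frac, end_frac):
    """Linearly interpolate the fraction of discriminator channels
    that are active at a given training step.

    Illustrative schedule for DynamicD's two regimes:
    - sufficient data:  start_frac < end_frac (capacity grows)
    - limited data:     start_frac > end_frac (capacity shrinks)
    """
    t = min(max(step / total_steps, 0.0), 1.0)  # clamp progress to [0, 1]
    return start_frac + (end_frac - start_frac) * t


def active_channels(num_channels, frac):
    """Number of channels kept active under the current width fraction."""
    return max(1, round(num_channels * frac))


# Sufficient-data regime: grow from 50% to 100% of a 512-channel layer.
print(dynamic_width_fraction(0, 1000, 0.5, 1.0))                          # 0.5
print(active_channels(512, dynamic_width_fraction(500, 1000, 0.5, 1.0)))  # 384

# Limited-data regime: shrink from 100% down to 50% of the width.
print(active_channels(512, dynamic_width_fraction(1000, 1000, 1.0, 0.5)))  # 256
```

In a real training loop, the resulting channel count would select which convolutional filters participate in each forward pass; the schedule itself adds no extra loss terms or computation, matching the paper's claim of zero additional training objectives.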

Ceyuan Yang, Yujun Shen, Yinghao Xu, Deli Zhao, Bo Dai, Bolei Zhou• 2022

Related benchmarks

Task              Dataset                       Result      Rank
Image Generation  LSUN church                   FID 3.87    95
Image Generation  FFHQ                          FID 3.53    22
Image Generation  LSUN bedroom                  FID 4.01    13
Image Synthesis   FFHQ 2K (256 resolution)      FID 23.47   9
Image Synthesis   FFHQ 0.1K (256 resolution)    FID 50.37   8
Image Synthesis   FFHQ 140K (256 resolution)    FID 3.53    8

Other info

Code
