
Least Squares Generative Adversarial Networks

About

Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN is equivalent to minimizing the Pearson $\chi^2$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stably during the learning process. We evaluate LSGANs on five scene datasets, and the experimental results show that the images generated by LSGANs are of better quality than those generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.
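The least squares objectives the abstract describes replace the sigmoid cross entropy losses with squared distances to target values: the discriminator pulls real outputs toward a label `b` and fake outputs toward a label `a`, while the generator pulls the discriminator's output on fakes toward a value `c`. A minimal NumPy sketch, assuming the common 0-1 coding (`a = 0`, `b = 1`, `c = 1`); the function names are illustrative, not from the paper's code:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    """Least squares discriminator loss.

    Pushes discriminator outputs on real samples toward the real
    label b and outputs on generated samples toward the fake label a.
    """
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_g_loss(d_fake, c=1.0):
    """Least squares generator loss.

    Pushes discriminator outputs on generated samples toward c,
    the value the generator wants the discriminator to assign to fakes.
    """
    return 0.5 * np.mean((d_fake - c) ** 2)
```

Because the loss is quadratic in the discriminator's raw (unsquashed) output, fake samples that lie far from the decision boundary still receive a non-vanishing gradient, which is the stability argument made above.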

Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang, Stephen Paul Smolley • 2016

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Generation | CIFAR-10 (test) | FID | 66.686 | 471 |
| Image Generation | CIFAR-10 | Inception Score | 9.05 | 178 |
| Image Generation | CelebA | FID | 53.9 | 110 |
| Image Generation | CIFAR100 | FID | 12.43 | 51 |
| Image Generation | CIFAR-100 (10% data) | Inception Score | 7.02 | 41 |
| Image Generation | CIFAR-100 (20% data) | Inception Score | 8.94 | 41 |
| Image Generation | CIFAR-10 (20% data) | Inception Score | 8.5 | 35 |
| Image Generation | CIFAR-10 (10% data) | Inception Score | 7.33 | 35 |
| Image Generation | CIFAR-100 (full data) | Inception Score | 10.75 | 35 |
| Image Generation | Tiny-ImageNet | Inception Score | 5.381 | 34 |

Showing 10 of 19 rows.
