
Bayesian GAN

About

Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs. Within this framework, we use stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as feature matching or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode collapse, produces interpretable and diverse candidate samples, and provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.
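The sampler the abstract refers to, stochastic gradient Hamiltonian Monte Carlo (SGHMC), can be sketched on a toy problem. The snippet below is a minimal illustration of the momentum-based SGHMC update applied to a 1-D Gaussian "posterior"; the target distribution, hyperparameters, and function names are illustrative assumptions, not taken from the paper, which applies the same update to the generator and discriminator weights.

```python
import numpy as np

def grad_log_post(theta):
    # Gradient of log N(theta | 0, 1); stands in for the (minibatch)
    # gradient of the log posterior over network weights.
    return -theta

def sghmc(n_steps=50_000, eta=1e-2, alpha=0.1, seed=0):
    # eta: step size, alpha: friction term. The injected noise has
    # variance 2 * alpha * eta, balancing the friction so that the
    # chain's stationary distribution approximates the target.
    rng = np.random.default_rng(seed)
    theta, v = 0.0, 0.0
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(0.0, np.sqrt(2 * alpha * eta))
        v = (1 - alpha) * v + eta * grad_log_post(theta) + noise
        theta = theta + v
        samples.append(theta)
    return np.array(samples[5_000:])  # discard burn-in

samples = sghmc()
print(samples.mean(), samples.var())  # roughly 0 and 1 for this target
```

In the Bayesian GAN, collecting such samples of the generator weights (rather than a single point estimate) is what yields the diverse candidate generators credited with avoiding mode collapse.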

Yunus Saatchi, Andrew Gordon Wilson · 2017

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | - | 3518 |
| Classification | CIFAR-10 (test) | Accuracy: 97.32 | 266 |
| Semi-supervised Image Classification | CIFAR-100 (test) | - | 23 |
| Semi-supervised Image Classification | CIFAR-10 (test) | Accuracy: 81.74 | 8 |
| Semi-supervised Image Classification | SVHN (test) | Accuracy: 92.63 | 8 |
