
VAE with a VampPrior

About

Many different methods for training deep generative models have been introduced in the past. In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call the "Variational Mixture of Posteriors" prior, or VampPrior for short. The VampPrior consists of a mixture distribution (e.g., a mixture of Gaussians) whose components are variational posteriors conditioned on learnable pseudo-inputs. We further extend this prior to a two-layer hierarchical model and show that this architecture, with its coupled prior and posterior, learns significantly better models. The model also avoids the usual local-optima issues, related to unused latent dimensions, that plague VAEs. We provide empirical studies on six datasets, namely static and dynamic MNIST, OMNIGLOT, Caltech 101 Silhouettes, Frey Faces, and Histopathology patches, and show that applying the hierarchical VampPrior delivers state-of-the-art results on all datasets in the unsupervised permutation-invariant setting, and results that are best or comparable to SOTA for the variant with convolutional networks.
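The core idea is that the prior over the latent code is not a fixed standard Gaussian but a mixture of the encoder's own variational posteriors, each evaluated at a learnable pseudo-input: p(z) = (1/K) Σ_k q(z | u_k). A minimal NumPy sketch of evaluating such a prior is below; the toy linear encoder, weight names, and dimensions are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D_x, D_z, K = 8, 2, 16               # input dim, latent dim, number of components
W_mu = rng.normal(size=(D_x, D_z))   # toy encoder weights (illustrative only)
W_var = rng.normal(size=(D_x, D_z))
pseudo_inputs = rng.normal(size=(K, D_x))  # learnable u_k in the real model


def encoder(x):
    """Toy stand-in for the VAE encoder: maps inputs to (mu, log_var)."""
    return np.tanh(x @ W_mu), np.tanh(x @ W_var)


def log_normal_diag(z, mu, log_var):
    """Log-density of a diagonal Gaussian, summed over latent dimensions."""
    return -0.5 * np.sum(
        log_var + np.log(2 * np.pi) + (z - mu) ** 2 / np.exp(log_var), axis=-1
    )


def vamp_prior_log_prob(z):
    """log p(z) under the K-component VampPrior mixture."""
    mu, log_var = encoder(pseudo_inputs)                 # (K, D_z) each
    comp = log_normal_diag(z[:, None, :], mu, log_var)   # (N, K) per-component log-probs
    # log-mean-exp over components, computed stably
    m = comp.max(axis=1, keepdims=True)
    return m.squeeze(1) + np.log(np.mean(np.exp(comp - m), axis=1))


z = rng.normal(size=(4, D_z))
log_p = vamp_prior_log_prob(z)       # one log-density per latent sample
print(log_p.shape)
```

In the full model the pseudo-inputs are trained jointly with the encoder and decoder by maximizing the ELBO, which is what couples the prior to the posterior.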

Jakub M. Tomczak, Max Welling • 2017

Related benchmarks

Task                 Dataset                                  Result                 Rank
Generative Modeling  MNIST (test)                             --                     35
Image Modeling       Omniglot (test)                          NLL 89.76              27
Density Estimation   OMNIGLOT dynamically binarized (test)    NLL 89.76              16
Image Generation     SVHN, latent dimension 16 (test)         FID 91.98              13
Image Generation     MNIST, latent dimension 16 (test)        FID 34.02              13
Image Generation     CIFAR-10, latent dimension 32 (test)     FID 198.1              13
Image Generation     CelebA, latent dimension 64 (test)       FID 73.87              13
Generative Modeling  Dynamically binarized MNIST (test)       --                     13
Generative Modeling  MNIST                                    --                     10
Clustering           MNIST (train+val)                        Utilized Clusters 100  8

(Showing 10 of 15 rows.)
