
Importance Weighted Autoencoders

About

The variational autoencoder (VAE; Kingma & Welling, 2014) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It typically makes strong assumptions about posterior inference, for instance that the posterior distribution is approximately factorial, and that its parameters can be approximated with nonlinear regression from the observations. As we show empirically, the VAE objective can lead to overly simplified representations which fail to use the network's entire modeling capacity. We present the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting. In the IWAE, the recognition network uses multiple samples to approximate the posterior, giving it increased flexibility to model complex posteriors which do not fit the VAE modeling assumptions. We show empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
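The abstract's central object is the k-sample importance-weighted bound, L_k = E[log (1/k) Σᵢ wᵢ] with wᵢ = p(x, hᵢ)/q(hᵢ|x), which is strictly tighter than the single-sample VAE bound (L_1) and tightens monotonically in k. The sketch below is not the paper's code; it estimates L_k with numpy on a hypothetical toy model (scalar Gaussian prior and likelihood, with the prior as a deliberately mismatched proposal) where the exact marginal likelihood is known, so the ordering L_1 ≤ L_5 ≤ L_50 ≤ log p(x) can be checked directly.

```python
import numpy as np

def log_gauss(x, mu, var):
    """Log density of a scalar Gaussian N(mu, var) evaluated at x."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def iwae_bound(log_w):
    """Monte Carlo estimate of the k-sample bound E[log (1/k) sum_i w_i].

    log_w: array of shape (num_estimates, k) holding log importance
    weights; the outer average is over independent k-sample estimates.
    """
    m = log_w.max(axis=1, keepdims=True)  # stabilise the log-mean-exp
    return float(np.mean(m[:, 0] + np.log(np.mean(np.exp(log_w - m), axis=1))))

# Toy model (hypothetical, for illustration only): prior p(h) = N(0, 1),
# likelihood p(x|h) = N(h, 1), and a mismatched proposal q(h|x) = N(0, 1)
# (the prior).  Then log w = log p(x|h), and the exact marginal likelihood
# is p(x) = N(x; 0, 2).
x = 1.5
true_log_px = log_gauss(x, 0.0, 2.0)

rng = np.random.default_rng(0)
bounds = {}
for k in (1, 5, 50):
    h = rng.standard_normal((20000, k))          # h ~ q(h|x), k samples per estimate
    bounds[k] = iwae_bound(log_gauss(x, h, 1.0))  # log w = log p(x|h)

# Bounds tighten monotonically in k while staying below log p(x):
# bounds[1] <= bounds[5] <= bounds[50] <= true_log_px
```

With k = 1 this reduces to the standard VAE ELBO estimate, so the gap between `bounds[1]` and `bounds[50]` illustrates the paper's claim that importance weighting recovers likelihood that a factorial single-sample posterior approximation leaves on the table.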

Yuri Burda, Roger Grosse, Ruslan Salakhutdinov · 2015

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Density Estimation | MNIST (test) | NLL (bits/dim) | 86.6 | 56 |
| Log-likelihood estimation | MNIST dynamically binarized (test) | Log-likelihood | 82.9 | 48 |
| Generative Modeling | MNIST (test) | – | – | 35 |
| Image Modeling | Omniglot (test) | NLL | 103.4 | 27 |
| Density Estimation | KMNIST (test) | Log-likelihood | -172.2 | 20 |
| Generative Modeling | KMNIST (test) | ELBO | -177 | 20 |
| Generative Modeling | letters (test) | ELBO | -132.7 | 20 |
| Density Estimation | OCR-letters (test) | Avg log-likelihood (nats) | -130.6 | 19 |
| Density Estimation | OMNIGLOT dynamically binarized (test) | NLL | 103.4 | 16 |
| Generative Modeling | MNIST permutation-invariant (test) | Log-likelihood | -82.9 | 10 |

Showing 10 of 16 rows
