
Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images

About

We present a hierarchical VAE that, for the first time, generates samples quickly while outperforming the PixelCNN in log-likelihood on all natural image benchmarks. We begin by observing that, in theory, VAEs can actually represent autoregressive models, as well as faster, better models if they exist, when made sufficiently deep. Despite this, autoregressive models have historically outperformed VAEs in log-likelihood. We test whether insufficient depth explains this gap by scaling a VAE to greater stochastic depth than previously explored and evaluating it on CIFAR-10, ImageNet, and FFHQ. In comparison to the PixelCNN, these very deep VAEs achieve higher likelihoods, use fewer parameters, generate samples thousands of times faster, and are more easily applied to high-resolution images. Qualitative studies suggest this is because the VAE learns efficient hierarchical visual representations. We release our source code and models at https://github.com/openai/vdvae.
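The core architectural idea is a top-down hierarchy of stochastic layers: each latent group is sampled conditioned on the groups above it and merged back into the decoder state, so deeper layers refine coarser ones. Below is a minimal NumPy sketch of that sampling path only; the layer parameterization (`tanh` prior mean, fixed log-std, additive merge) is a hypothetical stand-in, not the paper's actual network.

```python
import numpy as np

def sample_gaussian(mean, log_std, rng):
    """Reparameterized Gaussian sample: z = mean + std * eps."""
    eps = rng.standard_normal(mean.shape)
    return mean + np.exp(log_std) * eps

def top_down_sample(n_layers, dim, rng):
    """Unconditional top-down sampling through a stack of stochastic layers.

    Each layer's prior depends on the running decoder state h, which
    accumulates all latents sampled so far (hypothetical parameterization).
    """
    h = np.zeros(dim)  # running decoder state
    for _ in range(n_layers):
        mean = np.tanh(h)               # prior mean as a function of h
        log_std = np.full(dim, -1.0)    # fixed prior scale for the sketch
        z = sample_gaussian(mean, log_std, rng)
        h = h + z                       # merge the latent into the state
    return h

rng = np.random.default_rng(0)
x = top_down_sample(n_layers=4, dim=8, rng=rng)
print(x.shape)  # (8,)
```

In the full model this loop runs over dozens of resolution-tied layers with learned networks producing the prior parameters, which is what lets the hierarchy, in principle, subsume an autoregressive factorization.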

Rewon Child • 2020

Related benchmarks

| Task                           | Dataset               | Metric             | Value | Rank |
|--------------------------------|-----------------------|--------------------|-------|------|
| Image Generation               | CIFAR-10 (test)       | --                 | --    | 471  |
| Density Estimation             | CIFAR-10 (test)       | Bits/dim           | 2.87  | 134  |
| Density Estimation             | ImageNet 32x32 (test) | Bits per sub-pixel | 3.8   | 66   |
| Generative Modeling            | CIFAR-10 (test)       | NLL (bits/dim)     | 2.87  | 62   |
| Density Estimation             | ImageNet 64x64 (test) | Bits per sub-pixel | 3.52  | 62   |
| Image Generation               | FFHQ                  | FID                | 33.5  | 52   |
| Density Estimation             | CIFAR-10              | Bits/dim           | 2.87  | 40   |
| Unconditional Image Synthesis  | FFHQ 256x256 (test)   | FID                | 28.5  | 31   |
| Unconditional Image Generation | FFHQ 256x256 (test)   | FID                | 28.5  | 25   |
| Unconditional Image Modeling   | ImageNet 64x64        | Bits/dim           | 3.52  | 17   |

Showing 10 of 25 rows.
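The density-estimation rows report likelihood in bits per dimension (equivalently, bits per sub-pixel): the model's total negative log-likelihood in bits divided by the number of sub-pixels, which for a 32x32 RGB image is 3 * 32 * 32 = 3072. A small sketch of the conversion from nats, with illustrative values:

```python
import math

def nats_to_bits_per_dim(nll_nats, n_dims):
    """Convert a total NLL in nats to bits per dimension."""
    return nll_nats / (n_dims * math.log(2))

# A 32x32 RGB image has 3 * 32 * 32 = 3072 sub-pixels.
n_dims = 3 * 32 * 32

# Illustrative: a model scoring 2.87 bits/dim on CIFAR-10 assigns each
# image a total NLL of 2.87 * 3072 bits; here is that figure in nats.
nll_nats = 2.87 * n_dims * math.log(2)
print(round(nats_to_bits_per_dim(nll_nats, n_dims), 2))  # 2.87
```

Lower is better for bits/dim (it measures compression cost), while lower is also better for FID (it measures distance between generated and real image statistics); the two metric families are not comparable to each other.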

Other info

Code: https://github.com/openai/vdvae
