
Efficient-VDVAE: Less is more

About

Hierarchical VAEs have emerged in recent years as a reliable option for maximum likelihood estimation. However, instability issues and demanding computational requirements have hindered research progress in the area. We present simple modifications to the Very Deep VAE to make it converge up to $2.6\times$ faster, save up to $20\times$ in memory load and improve stability during training. Despite these changes, our models achieve comparable or better negative log-likelihood performance than current state-of-the-art models on all $7$ commonly used image datasets we evaluated on. We also make an argument against using 5-bit benchmarks as a way to measure hierarchical VAEs' performance due to undesirable biases caused by the 5-bit quantization. Additionally, we empirically demonstrate that roughly $3\%$ of the hierarchical VAE's latent space dimensions are sufficient to encode most of the image information without loss of performance, opening the door to efficiently leveraging the hierarchical VAEs' latent space in downstream tasks. We release our source code and models at https://github.com/Rayhane-mamah/Efficient-VDVAE .

Louay Hazami, Rayhane Mama, Ragavan Thurairatnam • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Generation | CelebA 64 x 64 (test) | - | - | 203 |
| Image Generation | CIFAR10 32x32 (test) | - | - | 154 |
| Density Estimation | CIFAR-10 | bpd | 2.87 | 40 |
| Density Estimation | ImageNet 32 x 32 | NLL (bits/dim) | 3.58 | 12 |
| Density Estimation | ImageNet 64 x 64 | NLL (bits/dim) | 3.3 | 8 |
| Density Estimation | CelebAHQ 256 x 256 5-bits | NLL (bits/dim) | 0.51 | 8 |
| Image Generation | FFHQ (test val) | Recall | 0.14 | 8 |
| Density Estimation | MNIST | NLL (nats) | 79.09 | 5 |
| Density Estimation | CelebA 64 x 64 | NLL (bits/dim) | 1.83 | 4 |
| Density Estimation | FFHQ 256 x 256 5-bits | NLL (bits/dim) | 0.53 | 4 |

Showing 10 of 14 rows.
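The table reports likelihood in two units: nats (MNIST) and bits per dimension (the other datasets). The two are related by a standard change of logarithm base, dividing the total NLL by the number of data dimensions. A minimal sketch of the conversion (the helper name is ours, not from the paper; only the 79.09-nat MNIST figure comes from the table):

```python
import math

def nats_to_bits_per_dim(nll_nats: float, num_dims: int) -> float:
    """Convert a total negative log-likelihood in nats
    to bits per dimension: divide by ln(2) to change the
    log base, and by the dimension count to normalize."""
    return nll_nats / (num_dims * math.log(2))

# MNIST images have 28 x 28 = 784 dimensions, so the
# table's 79.09 nats corresponds to roughly 0.1455 bits/dim.
bpd = nats_to_bits_per_dim(79.09, 28 * 28)
print(f"{bpd:.4f}")  # ≈ 0.1455
```

This explains why the MNIST number looks large next to the others: it is an unnormalized total in nats rather than a per-dimension figure in bits.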
