
Hierarchical Quantized Autoencoders

About

Despite progress in training neural networks for lossy image compression, current approaches fail to maintain both perceptual quality and abstract features at very low bitrates. Encouraged by recent success in learning discrete representations with Vector Quantized Variational Autoencoders (VQ-VAEs), we motivate the use of a hierarchy of VQ-VAEs to attain high factors of compression. We show that the combination of stochastic quantization and hierarchical latent structure aids likelihood-based image compression. This leads us to introduce a novel objective for training hierarchical VQ-VAEs. Our resulting scheme produces a Markovian series of latent variables that reconstructs images of high perceptual quality that retain semantically meaningful features. We provide qualitative and quantitative evaluations on the CelebA and MNIST datasets.
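The core mechanism the abstract relies on is vector quantization: encoder outputs are mapped onto a learned codebook of embedding vectors. The "stochastic quantization" the authors highlight replaces the usual nearest-neighbour lookup with sampling from a distribution over codes. Below is a minimal PyTorch sketch of that idea; the function name, temperature parameter, and straight-through gradient trick are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def stochastic_quantize(z_e, codebook, temperature=1.0):
    """Stochastically quantize encoder outputs against a codebook (sketch).

    z_e:       (batch, dim) continuous encoder outputs
    codebook:  (K, dim) learned embedding vectors
    Samples one code index per input from a softmax over negative squared
    distances, instead of taking the deterministic argmin.
    """
    # Squared Euclidean distance from each input to each codebook entry
    dists = torch.cdist(z_e, codebook) ** 2           # (batch, K)
    probs = F.softmax(-dists / temperature, dim=-1)   # closer codes are likelier
    idx = torch.multinomial(probs, num_samples=1).squeeze(-1)
    z_q = codebook[idx]                               # (batch, dim)
    # Straight-through estimator: gradients flow to z_e as if quantization
    # were the identity, a common trick for training VQ-VAEs
    z_q = z_e + (z_q - z_e).detach()
    return z_q, idx

# Example: 8 latent vectors of dimension 16, codebook of 32 entries
z_e = torch.randn(8, 16)
codebook = torch.randn(32, 16, requires_grad=True)
z_q, idx = stochastic_quantize(z_e, codebook)
```

As temperature approaches zero this collapses to the deterministic nearest-neighbour assignment of the original VQ-VAE; in a hierarchical scheme like the one described above, each level would quantize the latents produced by the level below it.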

Will Williams, Sam Ringer, Tom Ash, John Hughes, David MacLeod, Jamie Dougherty • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Image Reconstruction | CIFAR-10 | LPIPS | 0.2553 | 25 |
| Image Reconstruction | MNIST | -- | -- | 24 |
| Image Reconstruction | MNIST (val) | L1 Loss | 0.0202 | 6 |
| Image Reconstruction | CIFAR10 (val) | L1 Loss | 0.0533 | 6 |
