
Improving Inference for Neural Image Compression

About

We consider the problem of lossy image compression with deep latent variable models. State-of-the-art methods build on hierarchical variational autoencoders (VAEs) and learn inference networks to predict a compressible latent representation of each data point. Drawing on the variational inference perspective on compression, we identify three approximation gaps which limit performance in the conventional approach: an amortization gap, a discretization gap, and a marginalization gap. We propose remedies for each of these three limitations based on ideas related to iterative inference, stochastic annealing for discrete optimization, and bits-back coding, resulting in the first application of bits-back coding to lossy compression. In our experiments, which include extensive baseline comparisons and ablation studies, we achieve new state-of-the-art performance on lossy image compression using an established VAE architecture, by changing only the inference method.
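As a rough illustration of the iterative-inference idea (closing the amortization gap by refining each data point's latents directly), here is a minimal toy sketch. The linear decoder, squared-norm rate proxy, step size, and noisy "amortized" initialization are all hypothetical stand-ins for illustration, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained decoder: a fixed linear map.
D = rng.normal(size=(8, 4))
x = rng.normal(size=8)        # one data point to compress
lam = 0.1                     # rate-distortion trade-off weight (illustrative)

def rd_loss(z):
    # distortion + lambda * rate proxy (squared norm stands in for code length)
    return np.sum((x - D @ z) ** 2) + lam * np.sum(z ** 2)

def rd_grad(z):
    return -2.0 * D.T @ (x - D @ z) + 2.0 * lam * z

# "Amortized" guess: what an imperfect encoder network might output
# (least-squares solution plus noise, purely for illustration).
z = np.linalg.lstsq(D, x, rcond=None)[0] + 0.5 * rng.normal(size=4)
amortized_loss = rd_loss(z)

# Iterative inference: refine the latents for THIS data point by gradient
# descent on the rate-distortion objective, shrinking the amortization gap.
for _ in range(200):
    z -= 0.01 * rd_grad(z)

refined_loss = rd_loss(z)
# The per-instance refined latents attain a lower loss than the amortized guess.
# In the paper's full method, z would then be discretized (with the hard rounding
# relaxed via stochastic annealing) and entropy-coded, with bits-back coding
# addressing the marginalization gap.
```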

Yibo Yang, Robert Bamler, Stephan Mandt · 2020

Related benchmarks

Task                 | Dataset                                                     | Result                        | Rank
3D Image Compression | HiP-CT (test)                                               | PSNR (All): 46.56             | 10
3D Image Compression | Mouse whole-brain microscopic data, Biological data (test)  | Acc@200 (All Regions): 80.38  | 10
Data Compression     | Medical data                                                | Compression Time (s): 2.70e+3 | 9
Data Compression     | Biological data, 512x compression ratio                     | Compression Time (s): 3.23e+3 | 9
