
Generating Images with Sparse Representations

About

The high dimensionality of images presents architecture and sampling-efficiency challenges for likelihood-based generative models. Previous approaches such as VQ-VAE use deep autoencoders to obtain compact representations, which are more practical as inputs for likelihood-based models. We present an alternative approach, inspired by common image compression methods like JPEG, and convert images to quantized discrete cosine transform (DCT) blocks, which are represented sparsely as a sequence of DCT channel, spatial location, and DCT coefficient triples. We propose a Transformer-based autoregressive architecture, which is trained to sequentially predict the conditional distribution of the next element in such sequences, and which scales effectively to high resolution images. On a range of image datasets, we demonstrate that our approach can generate high quality, diverse images, with sample metric scores competitive with state of the art methods. We additionally show that simple modifications to our method yield effective image colorization and super-resolution models.
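The sparse representation described above can be sketched in a few lines: each image block is transformed with an orthonormal 2D DCT (as in JPEG), quantized, and the nonzero coefficients are kept as (DCT channel, spatial position, value) triples. This is an illustrative sketch, not the paper's implementation; the block size, quantization step `q`, and ordering of the triples here are assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix, as used for JPEG 8x8 blocks.
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] *= 1 / np.sqrt(2)
    return D * np.sqrt(2 / n)

def image_to_sparse_triples(img, block=8, q=20):
    """Convert a grayscale image into (channel, position, value) triples.

    Hypothetical helper illustrating the sparse DCT representation;
    block size, quantization step, and scan order are assumptions.
    """
    D = dct_matrix(block)
    h, w = img.shape
    triples = []
    positions = ((i, j) for i in range(0, h, block)
                        for j in range(0, w, block))
    for pos, (i, j) in enumerate(positions):
        # 2D DCT of the block, then uniform quantization.
        coeffs = D @ img[i:i + block, j:j + block] @ D.T
        quant = np.round(coeffs / q).astype(int)
        for ch in range(block * block):  # DCT channel index within the block
            v = int(quant.flat[ch])
            if v != 0:                   # keep only nonzero coefficients
                triples.append((ch, pos, v))
    return triples
```

Because most quantized coefficients are zero, the triple list is far shorter than the pixel grid, which is what makes it practical as an autoregressive Transformer's input sequence.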

Charlie Nash, Jacob Menick, Sander Dieleman, Peter W. Battaglia • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Class-conditional Image Generation | ImageNet 256x256 (val) | -- | 293 |
| Image Generation | ImageNet 256x256 | FID 36.51 | 243 |
| Class-conditional Image Generation | ImageNet 256x256 (train val) | FID 36.51 | 178 |
| Unconditional Image Generation | LSUN Bedrooms unconditional | FID 6.4 | 96 |
| Conditional Image Generation | ImageNet-1K 256x256 (val) | gFID 36.51 | 86 |
| Class-conditional image synthesis | ImageNet 256x256 (val) | FID 36.5 | 61 |
| Unconditional Image Generation | FFHQ 256x256 (test) | FID 13.06 | 25 |
| Image Generation | LSUN Churches 256x256 | FID 7.56 | 21 |
| Unconditional Image Generation | LSUN Church (test) | FID 7.56 | 17 |
| Unconditional image synthesis | FFHQ | FID 13.06 | 15 |

Showing 10 of 17 rows.
