
Autoregressive Image Generation without Vector Quantization

About

Conventional wisdom holds that autoregressive models for image generation are typically accompanied by vector-quantized tokens. We observe that while a discrete-valued space can facilitate representing a categorical distribution, it is not a necessity for autoregressive modeling. In this work, we propose to model the per-token probability distribution using a diffusion procedure, which allows us to apply autoregressive models in a continuous-valued space. Rather than using categorical cross-entropy loss, we define a Diffusion Loss function to model the per-token probability. This approach eliminates the need for discrete-valued tokenizers. We evaluate its effectiveness across a wide range of cases, including standard autoregressive models and generalized masked autoregressive (MAR) variants. By removing vector quantization, our image generator achieves strong results while enjoying the speed advantage of sequence modeling. We hope this work will motivate the use of autoregressive generation in other continuous-valued domains and applications. Code is available at: https://github.com/LTH14/mar.
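The core idea above — replacing categorical cross-entropy over a codebook with a diffusion objective on continuous tokens — can be sketched as a small noise-prediction head conditioned on the autoregressive model's output vector. The sketch below is illustrative, not the paper's implementation: the class name, MLP shape, noise schedule, and timestep embedding are all assumptions; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class DiffusionLoss(nn.Module):
    """Hypothetical sketch of a per-token Diffusion Loss.

    Instead of a cross-entropy over discrete codebook indices, a small
    MLP is trained to predict the noise added to a continuous token x,
    conditioned on the AR backbone's output vector z (standard
    epsilon-prediction denoising objective).
    """

    def __init__(self, token_dim: int, cond_dim: int,
                 hidden: int = 256, num_steps: int = 1000):
        super().__init__()
        self.num_steps = num_steps
        # Linear beta schedule; an illustrative choice, not the paper's.
        betas = torch.linspace(1e-4, 0.02, num_steps)
        self.register_buffer("alphas_cumprod",
                             torch.cumprod(1.0 - betas, dim=0))
        # Noise-prediction MLP over [noised token, condition, timestep].
        self.net = nn.Sequential(
            nn.Linear(token_dim + cond_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, token_dim),
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # x: ground-truth continuous tokens, shape (B, token_dim)
        # z: conditioning vectors from the AR model, shape (B, cond_dim)
        b = x.shape[0]
        t = torch.randint(0, self.num_steps, (b,), device=x.device)
        a = self.alphas_cumprod[t].unsqueeze(-1)            # (B, 1)
        noise = torch.randn_like(x)
        # Forward diffusion: noise the clean token at step t.
        x_t = a.sqrt() * x + (1.0 - a).sqrt() * noise
        # Crude scalar timestep embedding, for illustration only.
        t_embed = t.float().unsqueeze(-1) / self.num_steps
        pred = self.net(torch.cat([x_t, z, t_embed], dim=-1))
        # MSE between predicted and true noise.
        return ((pred - noise) ** 2).mean()
```

At generation time, the same head would be run as a reverse-diffusion sampler to draw each continuous token given z, so the sequence model itself never needs a discrete vocabulary.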

Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, Kaiming He • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-conditional Image Generation | ImageNet 256x256 | Inception Score (IS) | 304.1 | 441 |
| Image Generation | ImageNet 256x256 (val) | FID | 1.55 | 307 |
| Class-conditional Image Generation | ImageNet 256x256 (train) | IS | 303.7 | 305 |
| Class-conditional Image Generation | ImageNet 256x256 (val) | FID | 1.55 | 293 |
| Text-to-Image Generation | GenEval | GenEval Score | 75.75 | 277 |
| Image Generation | ImageNet 256x256 | FID | 1.55 | 243 |
| Image Generation | ImageNet (val) | FID | 1.78 | 198 |
| Class-conditional Image Generation | ImageNet 256x256 (train val) | FID | 1.55 | 178 |
| Class-conditional Image Generation | ImageNet 256x256 (test) | FID | 1.55 | 167 |
| Image Reconstruction | ImageNet 256x256 | rFID | 0.53 | 93 |

Showing 10 of 42 rows.

Other info

Code: https://github.com/LTH14/mar
