ImageFolder: Autoregressive Image Generation with Folded Tokens
About
Image tokenizers are crucial for visual generative models, e.g., diffusion models (DMs) and autoregressive (AR) models, as they construct the latent representation for modeling. Increasing token length is a common approach to improving image reconstruction quality. However, tokenizers with longer token lengths are not guaranteed to achieve better generation quality; there exists a trade-off between reconstruction and generation quality with respect to token length. In this paper, we investigate the impact of token length on both image reconstruction and generation and provide a flexible solution to this trade-off. We propose ImageFolder, a semantic tokenizer that provides spatially aligned image tokens that can be folded during autoregressive modeling to improve both generation efficiency and quality. To enhance representational capability without increasing token length, we leverage dual-branch product quantization to capture different contexts of images. Specifically, semantic regularization is introduced in one branch to encourage compact semantic information, while the other branch is designed to capture the remaining pixel-level details. Extensive experiments demonstrate that the ImageFolder tokenizer achieves superior image generation quality with a shorter token length.
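The dual-branch quantization and token folding described above can be sketched in a toy form. This is a minimal illustrative sketch, not the paper's actual implementation: the encoder latents are random stand-ins, the codebook sizes are arbitrary, and all names are hypothetical.

```python
import numpy as np

def quantize(z, codebook):
    """Nearest-neighbor lookup: map each latent vector to its closest code."""
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    idx = dists.argmin(-1)
    return idx, codebook[idx]

rng = np.random.default_rng(0)
n_tokens, dim, codebook_size = 16, 8, 32  # toy sizes, not the paper's

# Two independent codebooks (product quantization): one branch would carry
# semantically regularized features, the other residual pixel-level details.
cb_sem = rng.normal(size=(codebook_size, dim))
cb_det = rng.normal(size=(codebook_size, dim))

# Hypothetical encoder outputs: one latent per spatial position, per branch.
z_sem = rng.normal(size=(n_tokens, dim))
z_det = rng.normal(size=(n_tokens, dim))

idx_sem, q_sem = quantize(z_sem, cb_sem)
idx_det, q_det = quantize(z_det, cb_det)

# "Folding": because the two token streams are spatially aligned, they can be
# merged per position, so the AR model handles one folded token per location
# instead of two sequential ones -- halving the sequence length.
folded = np.stack([idx_sem, idx_det], axis=-1)  # (n_tokens, 2)
print(folded.shape)
```

The key point the sketch illustrates is that product quantization grows the effective codebook multiplicatively (here 32 x 32 combinations per position) while the folded sequence length stays at `n_tokens`.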
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Class-conditional Image Generation | ImageNet 256x256 | -- | -- | 441 |
| Class-conditional Image Generation | ImageNet 256x256 (train) | IS | 295 | 305 |
| Image Reconstruction | ImageNet 256x256 | rFID | 0.8 | 93 |
| Image Reconstruction | ImageNet1K (val) | FID | 0.8 | 83 |
| Image Generation | ImageNet | FID | 2.6 | 68 |
| Class-conditional Image Generation | ImageNet 256x256 2012 (val) | FID | 2.6 | 38 |
| Image Generation | ImageNet 256x256 (test val) | FID | 2.6 | 35 |
| Image Reconstruction | ImageNet 256x256 2012 (val) | rFID | 0.8 | 20 |