
SFTok: Bridging the Performance Gap in Discrete Tokenizers

About

Recent advances in multimodal models highlight the pivotal role of image tokenization in high-resolution image generation. By compressing images into compact latent representations, tokenizers enable generative models to operate in lower-dimensional spaces, thereby improving computational efficiency and reducing complexity. Discrete tokenizers naturally align with the autoregressive paradigm but still lag behind continuous ones in reconstruction quality, limiting their adoption in multimodal systems. To address this, we propose SFTok, a discrete tokenizer that incorporates a multi-step iterative mechanism for precise reconstruction. By integrating self-forcing guided visual reconstruction and a debias-and-fitting training strategy, SFTok resolves the training-inference inconsistency in the multi-step process, significantly enhancing image reconstruction quality. At a high compression rate of only 64 tokens per image, SFTok achieves state-of-the-art reconstruction quality on ImageNet (rFID = 1.21) and demonstrates exceptional performance in class-to-image generation tasks (gFID = 2.29).
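The core idea behind self-forcing in a multi-step tokenizer can be illustrated with a toy sketch: each refinement step conditions on the model's own previous reconstruction (exactly as at inference time), rather than on a ground-truth intermediate, so there is no train/inference mismatch. The scalar codebook, step count, and residual-quantization scheme below are illustrative assumptions, not SFTok's actual architecture:

```python
import numpy as np

def quantize(x, codebook):
    # Nearest-neighbor assignment to a 1-D codebook (toy stand-in for VQ).
    idx = np.argmin(np.abs(x[..., None] - codebook[None, :]), axis=-1)
    return codebook[idx]

def iterative_reconstruct(image, codebook, num_steps=4):
    # Multi-step refinement: each step quantizes the residual between the
    # target and the current reconstruction, then adds it back. Crucially,
    # every step consumes the model's OWN previous output (self-forcing),
    # so the loop is identical during training and inference.
    recon = np.zeros_like(image)
    for _ in range(num_steps):
        residual = image - recon
        recon = recon + quantize(residual, codebook)
    return recon

rng = np.random.default_rng(0)
img = rng.uniform(-1, 1, size=(8, 8))
codebook = np.linspace(-1, 1, 17)  # hypothetical 17-entry scalar codebook
out = iterative_reconstruct(img, codebook)
err = np.abs(out - img).max()
```

With a codebook spacing of 0.125, each step can cut the worst-case residual to at most half a bin, so the reconstruction error is bounded by the quantizer resolution regardless of how the loop is unrolled at training time.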

Qihang Rao, Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu • 2025

Related benchmarks

Task                               | Dataset           | Result     | Rank
Class-conditional Image Generation | ImageNet          | -          | 132
Image Reconstruction               | ImageNet-1K (val) | FID = 1.21 | 83
