
Latent Denoising Makes Good Tokenizers

About

Despite their fundamental role, it remains unclear what properties make tokenizers effective for generative modeling. We observe that modern generative models share a conceptually similar training objective -- reconstructing clean signals from corrupted inputs, such as signals degraded by Gaussian noise or masking -- a process we term denoising. Motivated by this insight, we propose aligning tokenizer embeddings directly with the downstream denoising objective, encouraging latent embeddings that remain reconstructable even under significant corruption. To achieve this, we introduce the Latent Denoising Tokenizer (l-DeTok), a simple yet highly effective tokenizer trained to reconstruct clean images from latent embeddings corrupted via interpolative noise or random masking. Extensive experiments on class-conditioned (ImageNet 256x256 and 512x512) and text-conditioned (MS-COCO) image generation benchmarks demonstrate that l-DeTok consistently improves generation quality across six representative generative models compared to prior tokenizers. Our findings highlight denoising as a fundamental design principle for tokenizer development, and we hope they motivate new perspectives for future tokenizer design.
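The two latent corruptions the abstract names -- interpolative noise and random masking -- can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, the per-sample interpolation factor `tau`, the `mask_ratio`, and the use of a mask token are assumptions made for the example.

```python
import torch


def interpolative_noise(z: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    """Blend latent tokens z (B, N, D) toward Gaussian noise.

    tau is a per-sample strength in [0, 1]: tau=0 leaves z untouched,
    tau=1 replaces it with pure noise.
    """
    eps = torch.randn_like(z)
    tau = tau.view(-1, 1, 1)  # broadcast (B,) over tokens and channels
    return (1.0 - tau) * z + tau * eps


def random_masking(z: torch.Tensor, mask_ratio: float,
                   mask_token: torch.Tensor) -> torch.Tensor:
    """Replace a random subset of latent tokens with a mask token.

    mask_token (D,) stands in for a learnable embedding.
    """
    B, N, D = z.shape
    keep = torch.rand(B, N, device=z.device) >= mask_ratio  # True = keep
    return torch.where(keep.unsqueeze(-1), z, mask_token.expand(B, N, D))


# Toy usage: a batch of 2 images, each encoded as 16 latent tokens of dim 8.
z = torch.randn(2, 16, 8)
z_noisy = interpolative_noise(z, tau=torch.rand(2))
z_masked = random_masking(z, mask_ratio=0.5, mask_token=torch.zeros(8))
assert z_noisy.shape == z.shape and z_masked.shape == z.shape
```

During tokenizer training, the decoder would then be asked to reconstruct the clean image from `z_noisy` or `z_masked`, which is what pushes the embeddings to stay reconstructable under heavy corruption.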

Jiawei Yang, Tianhong Li, Lijie Fan, Yonglong Tian, Yue Wang • 2025

Related benchmarks

Task                                 Dataset                   Metric                 Result   Rank
Class-conditional Image Generation   ImageNet 256x256          Inception Score (IS)   304.1    441
Image Generation                     ImageNet (val)            FID                    2.18     198
Class-conditional Image Generation   ImageNet 512x512 (val)    FID (Val)              1.61     69
Image Reconstruction                 ImageNet (val)            rFID                   0.68     54
Text-to-Image Generation             MS-COCO 30k (val)         FID                    4.31     42
