
UniFlow: A Unified Pixel Flow Tokenizer for Visual Understanding and Generation

About

The tokenizer is a crucial component for both visual understanding and generation. To advance toward the ultimate goal of universal modeling, recent research has focused on developing a unified tokenizer. However, existing tokenizers face a significant performance trade-off between understanding and generation, stemming from the inherent conflict between high-level semantic abstraction and low-level pixel reconstruction. To tackle this challenge, we propose a generic and unified tokenizer, namely UniFlow, built by flexibly adapting any visual encoder with a concise reconstruction decoder. Specifically, we introduce layer-wise adaptive self-distillation applied to well-pretrained visual encoders, which enables UniFlow to simultaneously inherit strong semantic features for visual understanding and flexibly adapt to model fine-grained details for visual generation. Moreover, we propose a lightweight patch-wise pixel flow decoder, which efficiently achieves high-fidelity pixel reconstruction by modeling a conditional flow from the noisy state back to the patch-wise pixel domain. By leveraging the semantic features as visual conditions for the decoder, we effectively alleviate the training conflicts between understanding and generation. Furthermore, the patch-wise learning strategy simplifies the data distribution, thereby improving training efficiency. Extensive experiments across 13 challenging benchmarks spanning 7 widely studied visual understanding and generation tasks demonstrate that UniFlow achieves a win-win outcome. For instance, our 7B UniFlow-XL not only surpasses the 14B TokenFlow-XL by 6.05% on average across understanding benchmarks, but also achieves competitive results in both visual reconstruction and generation, surpassing UniTok by 0.15 in rFID and 0.09 in gFID (without guidance), respectively.

Zhengrong Yue, Haiyu Zhang, Xiangyu Zeng, Boyu Chen, Chenting Wang, Shaobin Zhuang, Lu Dong, Yi Wang, Limin Wang, Yali Wang• 2025
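To make the decoder's training objective concrete, the sketch below shows how a conditional flow-matching target could be constructed per pixel patch, in the spirit of the patch-wise pixel flow decoder described above. This is a minimal illustration under common flow-matching conventions (linear interpolation path, constant velocity target), not the paper's actual implementation; the shapes and the `flow_matching_targets` helper are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_targets(x1, rng):
    """Build one conditional flow-matching training example per patch (sketch).

    x1: (num_patches, patch_dim) clean pixel patches (reconstruction target).
    Returns the noisy interpolant x_t, the per-patch timestep t, and the
    velocity target v = x1 - x0 that the decoder would regress via MSE.
    """
    x0 = rng.standard_normal(x1.shape)       # noise sample (the "noisy state")
    t = rng.uniform(size=(x1.shape[0], 1))   # per-patch timestep in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1            # linear interpolation path
    v_target = x1 - x0                       # constant velocity along the path
    return x_t, t, v_target

# Toy example: 4 patches of 16x16x3 pixels each.
x1 = rng.standard_normal((4, 16 * 16 * 3))
x_t, t, v = flow_matching_targets(x1, rng)
# A decoder(x_t, t, semantic_condition) would be trained to predict v,
# with semantic features from the encoder supplied as the condition.
```

Operating on individual patches rather than whole images is what simplifies the data distribution the decoder must model, which the abstract credits for the improved training efficiency.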

Related benchmarks

Task | Dataset | Result | Rank
Object Hallucination Evaluation | POPE | -- | 1455
Semantic Segmentation | ADE20K | mIoU 55.4 | 1024
Multimodal Understanding | MMBench | -- | 637
Class-conditional Image Generation | ImageNet 256x256 (val) | -- | 427
Multi-discipline Multimodal Understanding | MMMU | -- | 317
Object Detection | MS-COCO 2017 (val) | -- | 237
Multimodal Understanding | MME | MME Score 2.06e+3 | 207
Visual Question Answering | GQA | Score 65.86 | 193
Class-conditional Image Generation | ImageNet 256x256 (train val) | -- | 178
Text-to-Image Generation | DPG-Bench | DPG Score 84.76 | 131

(Showing 10 of 19 rows.)
