
Kelix Technical Report

About

Autoregressive large language models (LLMs) scale well by expressing diverse tasks as sequences of discrete natural-language tokens and training with next-token prediction, which unifies comprehension and generation under self-supervision. Extending this paradigm to multimodal data requires a shared, discrete representation across modalities. However, most vision-language models (VLMs) still rely on a hybrid interface: discrete text tokens paired with continuous Vision Transformer (ViT) features. Because supervision is largely text-driven, these models are often biased toward understanding and cannot fully leverage large-scale self-supervised learning on non-text data. Recent work has explored discrete visual tokenization to enable fully autoregressive multimodal modeling, showing promising progress toward unified understanding and generation. Yet existing discrete vision tokens frequently lose information due to limited code capacity, resulting in noticeably weaker understanding than continuous-feature VLMs. We present Kelix, a fully discrete autoregressive unified model that closes the understanding gap between discrete and continuous visual representations.
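To make the tokenization idea concrete, here is a minimal sketch (not Kelix's actual tokenizer) of vector-quantized visual tokenization: continuous patch features are snapped to their nearest entry in a learned codebook, yielding discrete token ids that an autoregressive LLM can model with next-token prediction. All names and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook; real visual tokenizers use far larger codebooks and
# learn the entries jointly with an encoder/decoder.
codebook_size, dim = 16, 8
codebook = rng.normal(size=(codebook_size, dim))

def tokenize(features: np.ndarray) -> np.ndarray:
    """Assign each feature vector the id of its nearest codebook entry."""
    # (n, 1, d) - (1, K, d) -> (n, K) squared distances
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def detokenize(ids: np.ndarray) -> np.ndarray:
    """Recover the quantized (lossy) features from discrete ids."""
    return codebook[ids]

patches = rng.normal(size=(4, dim))  # 4 stand-in image-patch features
ids = tokenize(patches)
recon = detokenize(ids)
print(ids.shape, recon.shape)  # (4,) (4, 8)
```

The lossiness the abstract describes shows up here directly: `recon` only approximates `patches`, and a too-small codebook discards detail the model later needs for understanding.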

Boyang Ding, Chenglong Chu, Dunju Zang, Han Li, Jiangxia Cao, Kun Gai, Muhao Wei, Ruiming Tang, Shiyao Wang, Siyang Mao, Xinchen Luo, Yahui Liu, Zhixin Ling, Zhuoran Yang, Ziming Li, Chengru Song, Guorui Zhou, Guowang Zhang, Hao Peng, Hao Wang, Jiaxin Deng, Jin Ouyang, Jinghao Zhang, Lejian Ren, Qianqian Wang, Qigen Hu, Tao Wang, Xingmei Wang, Yiping Yang, Zixing Zhang, Ziqi Wang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Text-based Visual Question Answering | TextVQA | Accuracy: 81.4 | 807 |
| Text-to-Image Generation | GenEval | Overall Score: 87.6 | 506 |
| Mathematical Reasoning | MathVista | Score: 76.5 | 385 |
| Visual Question Answering | ChartQA | -- | 371 |
| Multimodal Understanding | SEED-Bench | -- | 343 |
| OCR Evaluation | OCRBench | Score: 86.7 | 329 |
| Multi-discipline Multimodal Understanding | MMMU | -- | 317 |
| Text-to-Image Generation | DPG-Bench | Overall Score: 85.5 | 265 |
| Diagram Understanding | AI2D (test) | Accuracy: 82.4 | 131 |
| Multimodal Understanding | MMBench EN | Overall Score: 80.2 | 55 |
