LongCat-Next: Lexicalizing Modalities as Discrete Tokens
About
The prevailing Next-Token Prediction (NTP) paradigm has driven the success of large language models through discrete autoregressive modeling. However, contemporary multimodal systems remain language-centric, often treating non-linguistic modalities as external attachments, leading to fragmented architectures and suboptimal integration. To transcend this limitation, we introduce Discrete Native Autoregressive (DiNA), a unified framework that represents multimodal information within a shared discrete space, enabling consistent and principled autoregressive modeling across modalities. A key innovation is the Discrete Native Any-resolution Visual Transformer (dNaViT), which performs tokenization and de-tokenization at arbitrary resolutions, transforming continuous visual signals into hierarchical discrete tokens. Building on this foundation, we develop LongCat-Next, a native multimodal model that processes text, vision, and audio under a single autoregressive objective with minimal modality-specific design. As an industrial-strength foundation model, it excels at seeing, painting, and talking within a single framework, achieving strong performance across a wide range of multimodal benchmarks. In particular, LongCat-Next addresses the long-standing performance ceiling of discrete vision modeling on understanding tasks and provides a unified approach to effectively reconcile the conflict between understanding and generation. As a step toward native multimodality, we open-source LongCat-Next and its tokenizers, hoping to foster further research and development in the community. GitHub: https://github.com/meituan-longcat/LongCat-Next
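To make the "shared discrete space" idea concrete, the sketch below shows one common way such a space can be built: modality-specific token ids are offset into disjoint ranges of a single vocabulary, so one next-token-prediction head covers text, visual, and audio tokens alike. All names and sizes here are illustrative assumptions, not the actual LongCat-Next or dNaViT implementation.

```python
# Illustrative sketch of a shared discrete token space for multimodal NTP.
# Vocabulary/codebook sizes below are assumptions for demonstration only.
TEXT_VOCAB_SIZE = 32000      # assumed text vocabulary size
VISUAL_CODEBOOK_SIZE = 8192  # assumed visual codebook size
AUDIO_CODEBOOK_SIZE = 4096   # assumed audio codebook size

# Offset each modality into a disjoint id range of one shared vocabulary.
VISUAL_OFFSET = TEXT_VOCAB_SIZE
AUDIO_OFFSET = TEXT_VOCAB_SIZE + VISUAL_CODEBOOK_SIZE

def to_shared(token_id: int, modality: str) -> int:
    """Map a modality-local token id into the shared discrete space."""
    if modality == "text":
        return token_id
    if modality == "visual":
        return VISUAL_OFFSET + token_id
    if modality == "audio":
        return AUDIO_OFFSET + token_id
    raise ValueError(f"unknown modality: {modality}")

# A multimodal example becomes one flat stream of discrete tokens, trained
# with the ordinary autoregressive objective: predict token t+1 given tokens <= t.
sequence = (
    [to_shared(t, "text") for t in [17, 905, 3]]        # e.g. a text prompt
    + [to_shared(v, "visual") for v in [0, 511, 8191]]  # e.g. image token codes
)
print(sequence)
```

Because every modality lives in the same id space, no modality-specific loss or head is needed; the single cross-entropy objective of standard NTP applies to the whole sequence.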
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Automatic Speech Recognition | LibriSpeech clean (test) | WER 1.63 | 1156 |
| Automatic Speech Recognition | LibriSpeech (test-other) | WER 3.42 | 1151 |
| Mathematical Reasoning | MathVista | Score 83.1 | 385 |
| Multimodal Understanding | MMStar | -- | 324 |
| Document Visual Question Answering | DocVQA | -- | 263 |
| Optical Character Recognition | OCRBench | -- | 232 |
| Automatic Speech Recognition | WenetSpeech Meeting (test) | -- | 78 |
| Multimodal Understanding and Generation | WISE | Overall Accuracy 57 | 62 |
| Multimodal Understanding | MMMU | MMMU Score 70.6 | 59 |
| Automatic Speech Recognition | WenetSpeech Net (test) | -- | 57 |
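The WER figures in the table are word error rates in percent (lower is better). As a reference point, WER is the word-level edit distance between the hypothesis and reference transcripts divided by the reference length; the minimal sketch below computes it with a standard dynamic-programming edit distance (this is the generic metric definition, not LongCat-Next's evaluation code).

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One insertion against a 3-word reference: WER = 1/3.
print(wer("the cat sat", "the cat sat on"))
```

A reported "WER 1.63" thus means roughly 1.63 word-level errors per 100 reference words.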