See the Text: From Tokenization to Visual Reading

About

People see text. Humans read by recognizing words as visual objects, including their shapes, layouts, and patterns, before connecting them to meaning, which enables us to handle typos, distorted fonts, and various scripts effectively. Modern large language models (LLMs), however, rely on subword tokenization, fragmenting text into pieces from a fixed vocabulary. While effective for high-resource languages, this approach over-segments low-resource languages, yielding long, linguistically meaningless sequences and inflating computation. In this work, we challenge this entrenched paradigm and move toward a vision-centric alternative. Our method, SeeTok, renders text as images (visual-text) and leverages pretrained multimodal LLMs to interpret them, reusing strong OCR and text-vision alignment abilities learned from large-scale multimodal training. Across three different language tasks, SeeTok matches or surpasses subword tokenizers while requiring 4.43 times fewer tokens and reducing FLOPs by 70.5%, with additional gains in cross-lingual generalization, robustness to typographic noise, and linguistic hierarchy. SeeTok signals a shift from symbolic tokenization to human-like visual reading, and takes a step toward more natural and cognitively inspired language models.
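To make the token-count argument concrete, here is a minimal back-of-the-envelope sketch (not the authors' implementation): if text is rendered as an image and consumed in fixed-size vision patches, one visual token can cover several characters, whereas a subword tokenizer on a low-resource script often falls back to roughly one token per character. All numbers below (glyph width, patch size) are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only: why visual-text can need fewer tokens than
# subword tokenization. GLYPH_WIDTH_PX and PATCH_SIZE_PX are assumed,
# hypothetical values, not taken from the SeeTok paper.

GLYPH_WIDTH_PX = 10   # assumed average rendered width per character
PATCH_SIZE_PX = 32    # assumed ViT-style square patch size

def visual_token_count(text: str) -> int:
    """Number of patches needed to cover one rendered line of text."""
    width_px = len(text) * GLYPH_WIDTH_PX
    return -(-width_px // PATCH_SIZE_PX)  # ceiling division

def naive_subword_count(text: str) -> int:
    """Crude stand-in for a subword tokenizer: on an unfamiliar script,
    unseen words often over-segment to ~1 token per character."""
    return len(text.replace(" ", ""))

sentence = "tokenization fragments rare words"
print(visual_token_count(sentence))   # 11 visual tokens
print(naive_subword_count(sentence))  # 30 character-level tokens
```

Under these assumed constants the visual reading path uses roughly a third as many tokens, the same direction (though not the same magnitude) as the paper's reported 4.43× reduction.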

Ling Xing, Rui Yan, Alex Jinpeng Wang, Zechao Li, Jinhui Tang • 2025

Related benchmarks

Task                      Dataset                                       Metric       Result  Rank
Question Answering        TriviaQA                                      EM           43.53   182
Knowledge Reasoning       MMLU                                          Accuracy     52.52   65
Sentiment Classification  SST-5                                         Accuracy     44.4    46
Question Answering        NQ                                            Exact Match  24.14   46
Question Answering        PopQA                                         Exact Match  24.26   25
Multilingual Translation  WMT22 (de, cs, zh, ru) and WMT21 (is), test   COMET (de)   65.63   4
