From Pixels to Words -- Towards Native Vision-Language Primitives at Scale

About

The edifice of native Vision-Language Models (VLMs) has emerged as a rising contender to typical modular VLMs, shaped by evolving model architectures and training paradigms. Yet, two lingering clouds cast shadows over its widespread exploration and promotion: (1) What fundamental constraints set native VLMs apart from modular ones, and to what extent can these barriers be overcome? (2) How can research in native VLMs be made more accessible and democratized, thereby accelerating progress in the field? In this paper, we clarify these challenges and outline guiding principles for constructing native VLMs. Specifically, one native VLM primitive should: (i) effectively align pixel and word representations within a shared semantic space; (ii) seamlessly integrate the strengths of formerly separate vision and language modules; (iii) inherently embody various cross-modal properties that support unified vision-language encoding, aligning, and reasoning. Hence, we launch NEO, a novel family of native VLMs built from first principles, greatly narrowing the gap with top-tier modular counterparts across diverse real-world scenarios. With 390M image-text examples, NEO efficiently develops visual perception from scratch while mitigating vision-language conflicts inside a dense and monolithic model crafted from our elaborate primitives. We position NEO as a cornerstone for scalable and powerful native VLM development, paired with a rich set of reusable components that foster a cost-effective and extensible ecosystem. Our code and models are publicly available at: https://github.com/EvolvingLMMs-Lab/NEO.
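To make the idea of a single native primitive concrete, below is a minimal, hypothetical sketch of a monolithic transformer that embeds raw image patches and word tokens into one shared space and processes them jointly, in the spirit of principles (i)-(iii). It is not the NEO implementation; every module name and hyperparameter here is an illustrative placeholder.

```python
# Illustrative sketch of a "native VLM primitive": one dense transformer that maps
# pixels and words into a shared embedding space and handles encoding, aligning,
# and reasoning in a single stack. NOT the NEO architecture; names and sizes are
# hypothetical placeholders chosen only for readability.
import torch
import torch.nn as nn

class NativeVLMPrimitive(nn.Module):
    def __init__(self, vocab_size=32000, patch_size=16, dim=768, depth=12, heads=12):
        super().__init__()
        # Pixels: a linear patch embedding maps flattened RGB patches into the shared space.
        self.patch_embed = nn.Linear(3 * patch_size * patch_size, dim)
        # Words: a standard token embedding maps word ids into the same space.
        self.token_embed = nn.Embedding(vocab_size, dim)
        # One monolithic stack processes both modalities; no separate vision encoder or projector.
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)
        self.lm_head = nn.Linear(dim, vocab_size)
        self.patch_size = patch_size

    def forward(self, images, token_ids):
        # images: (B, 3, H, W); token_ids: (B, T)
        B, C, H, W = images.shape
        p = self.patch_size
        # Cut the image into non-overlapping p x p patches and flatten each one.
        patches = images.unfold(2, p, p).unfold(3, p, p)            # (B, 3, H/p, W/p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        visual = self.patch_embed(patches)                          # pixels -> shared space
        textual = self.token_embed(token_ids)                       # words  -> shared space
        x = torch.cat([visual, textual], dim=1)                     # one joint sequence
        x = self.backbone(x)
        # Predict vocabulary logits only over the text positions.
        return self.lm_head(x[:, visual.size(1):])

# Toy usage: one 224x224 image and an 8-token prompt.
model = NativeVLMPrimitive()
logits = model(torch.randn(1, 3, 224, 224), torch.randint(0, 32000, (1, 8)))
print(logits.shape)  # (1, 8, 32000)
```

A real system would additionally need a causal attention mask over text positions, positional encodings suited to 2D patches, and large-scale vision-language pre-training; the sketch only illustrates the structural point that a single shared-space model can replace the separate encoder-projector-LLM pipeline of modular VLMs.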

Haiwen Diao, Mingxuan Li, Silei Wu, Linjun Dai, Xiaohua Wang, Hanming Deng, Lewei Lu, Dahua Lin, Ziwei Liu • 2025

Related benchmarks

Task | Dataset | Result | Rank
Object Hallucination Evaluation | POPE | -- | 935
Text-based Visual Question Answering | TextVQA | Accuracy: 75 | 496
Multi-discipline Multimodal Understanding | MMMU | -- | 266
Chart Question Answering | ChartQA | Accuracy: 82.1 | 229
Visual Question Answering | AI2D | Accuracy: 83.1 | 174
Document Visual Question Answering | DocVQA | ANLS: 89.9 | 164
Optical Character Recognition Evaluation | OCRBench | Score: 77.7 | 46
Infographic Visual Question Answering | InfoVQA | Accuracy: 63.2 | 40
Multi-modal Vision-Language Understanding | MMVet | Score: 53.6 | 38
General Vision-Language Understanding | MMB | Score: 82.1 | 25
(Showing 10 of 13 benchmark rows.)
