
DenseMLLM: Standard Multimodal LLMs are Intrinsic Dense Predictors

About

Multimodal Large Language Models (MLLMs) have demonstrated exceptional capabilities in high-level visual understanding. However, extending these models to fine-grained dense prediction tasks, such as semantic segmentation and depth estimation, typically requires complex, task-specific decoders and other customizations. This architectural fragmentation increases model complexity and deviates from the generalist design of MLLMs, ultimately limiting their practicality. In this work, we challenge this paradigm by adapting standard MLLMs to perform dense prediction without additional task-specific decoders. The proposed model, DenseMLLM, retains the standard architecture and introduces a novel vision-token supervision strategy that covers multiple labels and tasks. Despite its minimalist design, the model achieves highly competitive performance across a wide range of dense prediction and vision-language benchmarks, demonstrating that a standard, general-purpose MLLM can effectively support dense perception without architectural specialization.
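The core idea of supervising vision tokens directly, rather than routing them through a task-specific decoder, can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual method: the function name `tokens_to_dense_map`, the per-token class logits, and the nearest-neighbor broadcast of each token's prediction over its patch footprint are hypothetical stand-ins for however DenseMLLM maps token-level outputs back to pixel space.

```python
import numpy as np

def tokens_to_dense_map(token_logits, grid_hw, patch=14):
    """Turn per-vision-token class logits into a dense segmentation map.

    token_logits: (num_tokens, num_classes) array, one row per vision token.
    grid_hw:      (gh, gw) token grid, so num_tokens == gh * gw.
    patch:        side length in pixels covered by one token.
    """
    gh, gw = grid_hw
    assert token_logits.shape[0] == gh * gw
    # Per-token class decision, laid out on the patch grid.
    grid_pred = token_logits.argmax(-1).reshape(gh, gw)
    # Broadcast each token's label over its patch footprint
    # (nearest-neighbor upsampling; no learned decoder involved).
    dense = np.repeat(np.repeat(grid_pred, patch, axis=0), patch, axis=1)
    return dense  # (gh * patch, gw * patch) label map

# Toy example: a 2x2 token grid, 3 classes, 4-pixel patches.
rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 3))
seg = tokens_to_dense_map(logits, (2, 2), patch=4)
print(seg.shape)  # (8, 8)
```

Because each output pixel inherits its label from exactly one vision token, supervising the tokens themselves (e.g. with a standard cross-entropy against a downsampled ground-truth map) is sufficient to train dense prediction, which is the architectural point the abstract makes.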

Yi Li, Hongze Shen, Lexiang Tang, Xin Li, Xinpeng Ding, Yinsong Liu, Deqiang Jiang, Xing Sun, Xiaomeng Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination Evaluation | POPE | Accuracy | 86.4 | 132 |
| Hallucination Evaluation | HallusionBench | -- | -- | 93 |
| Chart Understanding | ChartQA | Accuracy | 85.3 | 83 |
| Mathematical Multimodal Reasoning | MathVista | Accuracy | 76.5 | 46 |
| Multimodal Question Answering | MMBench | Accuracy | 83.9 | 30 |
| Multimodal Question Answering | MM-Vet | Total Score | 64.6 | 24 |
| General Multimodal Question Answering | MMStar | Accuracy | 71.1 | 3 |
| General Multimodal Question Answering | MME | Total Score | 2380 | 3 |
| Multimodal Hallucination and Real-world Evaluation | RealworldQA | Accuracy | 74.6 | 3 |
| Multimodal Reasoning and Mathematics | M-Verse | Accuracy | 56.5 | 3 |

Showing 10 of 15 rows.
