
DenseMLLM: Standard Multimodal LLMs are Intrinsic Dense Predictors

About

Multimodal Large Language Models (MLLMs) have demonstrated exceptional capabilities in high-level visual understanding. However, extending these models to fine-grained dense prediction tasks, such as semantic segmentation and depth estimation, typically necessitates the incorporation of complex, task-specific decoders and other customizations. This architectural fragmentation increases model complexity and deviates from the generalist design of MLLMs, ultimately limiting their practicality. In this work, we challenge this paradigm by adapting standard MLLMs to perform dense predictions without requiring additional task-specific decoders. The proposed model, DenseMLLM, retains the standard architecture and introduces a novel vision-token supervision strategy that handles multiple labels and tasks. Despite its minimalist design, our model achieves highly competitive performance across a wide range of dense prediction and vision-language benchmarks, demonstrating that a standard, general-purpose MLLM can effectively support dense perception without architectural specialization.
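The core idea, supervising the model's vision tokens directly rather than routing them through a task-specific decoder, can be illustrated with a minimal NumPy sketch. Everything below (shapes, the linear readout, patch-grid labels) is an illustrative assumption for exposition, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical sketch of "vision token supervision" for semantic
# segmentation: the LLM's final hidden states at vision-token positions
# are mapped to per-patch class logits by a plain linear readout, and
# supervised with patch-level labels. No task-specific decoder is used.
rng = np.random.default_rng(0)

num_patches = 16 * 16   # vision tokens from a 16x16 patch grid (toy value)
hidden_dim = 64         # LLM hidden size (toy value)
num_classes = 5         # segmentation classes (toy value)

# Final-layer hidden states at the vision-token positions.
vision_hidden = rng.normal(size=(num_patches, hidden_dim))

# Lightweight linear readout: one set of logits per vision token.
W = rng.normal(size=(hidden_dim, num_classes)) * 0.01
logits = vision_hidden @ W                     # (num_patches, num_classes)

# Patch-level labels: a ground-truth mask downsampled to the patch grid.
labels = rng.integers(0, num_classes, size=num_patches)

# Per-patch cross-entropy -- the dense supervision signal on vision tokens.
shifted = logits - logits.max(axis=1, keepdims=True)
log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
loss = -log_probs[np.arange(num_patches), labels].mean()
```

Because the readout is a single linear layer over existing token states, swapping the label space (e.g. depth bins instead of segmentation classes) changes only the supervision target, which is what lets one generalist architecture cover multiple dense tasks.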

Yi Li, Hongze Shen, Lexiang Tang, Xin Li, Xinpeng Ding, Yinsong Liu, Deqiang Jiang, Xing Sun, Xiaomeng Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Multimodal Reasoning | MathVista | Accuracy | 76.5 | 218 |
| Hallucination Evaluation | POPE | Accuracy | 86.4 | 153 |
| Chart Understanding | ChartQA | Accuracy | 85.3 | 127 |
| Hallucination Evaluation | HallusionBench | -- | -- | 108 |
| Multimodal Question Answering | MMBench | Accuracy | 83.9 | 55 |
| Multimodal Question Answering | MM-Vet | Total Score | 64.6 | 24 |
| OCR and Chart Understanding | OCRBench | Total Score | 813 | 20 |
| OCR and Chart Understanding | TextVQA | Accuracy | 79.6 | 9 |
| General Multimodal Question Answering | MMStar | Accuracy | 71.1 | 3 |
| General Multimodal Question Answering | MME | Total Score | 2380 | 3 |
Showing 10 of 15 rows
