DenseMLLM: Standard Multimodal LLMs are Intrinsic Dense Predictors
About
Multimodal Large Language Models (MLLMs) have demonstrated exceptional capabilities in high-level visual understanding. However, extending these models to fine-grained dense prediction tasks, such as semantic segmentation and depth estimation, typically necessitates the incorporation of complex, task-specific decoders and other customizations. This architectural fragmentation increases model complexity and deviates from the generalist design of MLLMs, ultimately limiting their practicality. In this work, we challenge this paradigm by adapting standard MLLMs to perform dense predictions without requiring additional task-specific decoders. The proposed model, DenseMLLM, retains the standard MLLM architecture and introduces a novel vision-token supervision strategy that accommodates multiple labels and tasks. Despite its minimalist design, our model achieves highly competitive performance across a wide range of dense prediction and vision-language benchmarks, demonstrating that a standard, general-purpose MLLM can effectively support dense perception without architectural specialization.
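The core idea of decoder-free dense prediction can be illustrated with a toy sketch: rather than attaching a segmentation or depth head, the hidden state of each vision token is pushed through the model's existing LM head and supervised with a per-patch label drawn from the vocabulary. All names, shapes, and the loss formulation below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical sketch of vision-token supervision (assumed, not the paper's code):
# each vision token's hidden state is projected by the shared LM head, and the
# resulting logits are trained against a dense per-patch label (e.g. a semantic
# class id mapped into the LM vocabulary). No task-specific decoder is involved.

rng = np.random.default_rng(0)

num_patches = 16   # vision tokens from the image encoder (e.g. a 4x4 patch grid)
hidden_dim = 32    # MLLM hidden size (toy value)
vocab_size = 100   # LM vocabulary; dense labels are mapped to token ids

hidden_states = rng.normal(size=(num_patches, hidden_dim))  # vision-token outputs
lm_head = rng.normal(size=(hidden_dim, vocab_size))         # shared LM head weights

# Dense ground truth: one class token id per image patch (e.g. segmentation map).
labels = rng.integers(0, vocab_size, size=num_patches)

def dense_token_loss(h, w, y):
    """Mean cross-entropy between each vision token's LM-head logits and its label."""
    logits = h @ w                                   # (num_patches, vocab_size)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

loss = dense_token_loss(hidden_states, lm_head, labels)
print(f"dense vision-token loss: {loss:.3f}")
```

Because the supervision reuses the LM head, the same forward pass can in principle serve both text generation and dense per-patch prediction, which is what lets the architecture stay standard.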
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Hallucination Evaluation | POPE | Accuracy: 86.4 | 132 |
| Hallucination Evaluation | HallusionBench | -- | 93 |
| Chart Understanding | ChartQA | Accuracy: 85.3 | 83 |
| Mathematical Multimodal Reasoning | MathVista | Accuracy: 76.5 | 46 |
| Multimodal Question Answering | MMBench | Accuracy: 83.9 | 30 |
| Multimodal Question Answering | MM-Vet | Total Score: 64.6 | 24 |
| General Multimodal Question Answering | MMStar | Accuracy: 71.1 | 3 |
| General Multimodal Question Answering | MME | Total Score: 2380 | 3 |
| Multimodal Hallucination and Real-world Evaluation | RealWorldQA | Accuracy: 74.6 | 3 |
| Multimodal Reasoning and Mathematics | MathVerse | Accuracy: 56.5 | 3 |