Multimodal Latent Reasoning via Hierarchical Visual Cues Injection
About
The advancement of multimodal large language models (MLLMs) has enabled impressive perception capabilities. However, their reasoning often follows a "fast thinking" paradigm, relying on end-to-end generation or explicit, language-centric chains of thought (CoT), which can be inefficient, verbose, and prone to hallucination. This work posits that robust reasoning should unfold within a latent space that integrates multimodal signals seamlessly. We propose multimodal latent reasoning via HIerarchical Visual cuEs injection (*HIVE*), a novel framework that instills deliberate "slow thinking" without depending on superficial textual rationales. Our method recursively extends transformer blocks, creating an internal loop for iterative reasoning refinement. Crucially, it grounds this process by injecting hierarchical visual cues, ranging from global scene context to fine-grained regional details, directly into the model's latent representations. This enables the model to perform grounded, multi-step inference entirely in the aligned latent space. Extensive evaluations demonstrate that test-time scaling is effective when it incorporates visual knowledge, and that integrating hierarchical information significantly enhances the model's understanding of complex scenes.
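The recursive-loop-with-cue-injection idea lends itself to a compact sketch. Below is a minimal, illustrative PyTorch implementation, assuming pre-extracted global and regional visual features; the class name `HIVELatentReasoner`, the projection layers, and the concatenate-then-drop injection scheme are our own assumptions for illustration, not the paper's published interface.

```python
import torch
import torch.nn as nn


class HIVELatentReasoner(nn.Module):
    """Illustrative sketch: latent reasoning via a recursive transformer
    loop with hierarchical visual cue injection. Names and the injection
    scheme are assumptions, not the paper's actual interface."""

    def __init__(self, d_model: int = 1024, n_heads: int = 16, n_loops: int = 4):
        super().__init__()
        # A single shared transformer block, applied recursively to form
        # the internal "slow thinking" loop.
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.n_loops = n_loops
        # Separate projections for global (scene-level) and regional
        # (fine-grained) visual cues.
        self.global_proj = nn.Linear(d_model, d_model)
        self.region_proj = nn.Linear(d_model, d_model)

    def forward(self, h, global_cue, region_cues):
        # h:           (B, T, D) latent token states from the MLLM backbone
        # global_cue:  (B, 1, D) pooled scene-level visual feature
        # region_cues: (B, R, D) fine-grained regional visual features
        for _ in range(self.n_loops):
            # Inject hierarchical visual cues directly into the latent
            # sequence: coarse scene context first, then regional detail.
            cues = torch.cat(
                [self.global_proj(global_cue), self.region_proj(region_cues)],
                dim=1,
            )
            h = torch.cat([cues, h], dim=1)
            # One recursive pass of the shared block refines the latent
            # reasoning state.
            h = self.block(h)
            # Drop the cue tokens so the latent sequence length stays fixed
            # across loop iterations.
            h = h[:, cues.size(1):, :]
        return h


if __name__ == "__main__":
    model = HIVELatentReasoner()
    h = torch.randn(2, 32, 1024)   # latent token states
    g = torch.randn(2, 1, 1024)    # global scene cue
    r = torch.randn(2, 8, 1024)    # regional cues
    print(model(h, g, r).shape)    # torch.Size([2, 32, 1024])
```

In this sketch, sharing one block across iterations keeps the parameter count fixed, so test-time scaling can be explored simply by raising `n_loops`; how the actual method schedules or weights the loop is not specified here.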
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Object Hallucination Evaluation | POPE | -- | 935 |
| Text-based Visual Question Answering | TextVQA (val) | -- | 146 |
| Visual Question Answering | SEED-Bench Image | Accuracy: 70.5 | 64 |
| OCR VQA | ChartQA (test) | Accuracy: 67 | 18 |
| Visual Question Answering | ScienceQA image | Score: 91.6 | 17 |
| General Visual Question Answering | MMBench en (dev) | Overall Score: 69.6 | 5 |
| General Visual Question Answering | RealworldQA | Score: 57.9 | 3 |
| OCR and Chart Visual Question Answering | DocVQA (val) | Score: 73.2 | 3 |