
Latent Implicit Visual Reasoning

About

While Large Multimodal Models (LMMs) have made significant progress, they remain largely text-centric, relying on language as their core reasoning modality. As a result, they are limited in their ability to handle reasoning tasks that are predominantly visual. Recent approaches have sought to address this by supervising intermediate visual steps with helper images, depth maps, or image crops. However, these strategies impose restrictive priors on what "useful" visual abstractions look like, add heavy annotation costs, and struggle to generalize across tasks. To address this critical limitation, we propose a task-agnostic mechanism that trains LMMs to discover and use visual reasoning tokens without explicit supervision. These tokens attend globally and re-encode the image in a task-adaptive way, enabling the model to extract relevant visual information without hand-crafted supervision. Our approach outperforms direct fine-tuning and achieves state-of-the-art results on a diverse range of vision-centric tasks -- including those where intermediate abstractions are hard to specify -- while also generalizing to multi-task instruction tuning.
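
To make the mechanism concrete, here is a minimal sketch (not the authors' released code) of one way such unsupervised visual reasoning tokens could be wired into an LMM: a small set of learnable query tokens cross-attends globally over the patch embeddings from the vision encoder, producing a task-adaptive re-encoding of the image that is concatenated with the usual visual and text embeddings before the language backbone. All names and sizes (VisualReasoningTokens, num_tokens, dim) are hypothetical illustrations, not the paper's actual parameters.

```python
import torch
import torch.nn as nn

class VisualReasoningTokens(nn.Module):
    """Hypothetical sketch of latent visual reasoning tokens.

    The tokens are learned end-to-end during fine-tuning, with no
    intermediate visual supervision (no helper images, depth maps,
    or crops), and re-encode the image in a task-adaptive way.
    """

    def __init__(self, num_tokens: int = 8, dim: int = 1024, num_heads: int = 8):
        super().__init__()
        # Learnable queries the model is free to repurpose per task.
        self.tokens = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_embeds: torch.Tensor) -> torch.Tensor:
        # patch_embeds: (batch, num_patches, dim) from the vision encoder.
        batch = patch_embeds.size(0)
        queries = self.tokens.unsqueeze(0).expand(batch, -1, -1)
        # Global attention: every reasoning token can read every patch.
        attended, _ = self.cross_attn(queries, patch_embeds, patch_embeds)
        return self.norm(queries + attended)  # (batch, num_tokens, dim)

# Usage sketch: append the re-encoded tokens to the image embeddings
# before they are fed, together with text tokens, into the LMM.
vrt = VisualReasoningTokens(num_tokens=8, dim=1024)
patches = torch.randn(2, 576, 1024)      # e.g. a 24x24 patch grid
reasoning_tokens = vrt(patches)          # (2, 8, 1024)
lm_visual_input = torch.cat([patches, reasoning_tokens], dim=1)
```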

Kelvin Li, Chuyi Shang, Leonid Karlinsky, Rogerio Feris, Trevor Darrell, Roei Herzig • 2025

Related benchmarks

Task | Dataset | Result | Rank
Hallucination Evaluation | POPE | Accuracy: 89.5 | 132
Real-world Visual Question Answering | RealworldQA | Accuracy: 69.2 | 91
High-resolution Image Comprehension | HRBench | HRBench 4K Score: 0.724 | 9
Visual Perception and Reasoning | BLINK | Accuracy: 56.3 | 9
Visual Perception | V*Bench | Accuracy: 79.1 | 9
Visual Perception | CVBench | 2D Score: 73.9 | 5
