
Decompose, Look, and Reason: Reinforced Latent Reasoning for VLMs

About

Vision-Language Models often struggle with complex visual reasoning due to the loss of visual information in textual chain-of-thought (CoT). Existing methods either incur the cost of tool calls or rely on localized patch-based embeddings that are insufficient for extracting semantics in multi-step reasoning. We propose "Decompose, Look, and Reason" (DLR), a reinforced latent reasoning framework that dynamically decomposes queries into textual premises, extracts premise-conditioned continuous visual latents, and deduces answers through grounded rationales. We introduce a three-stage training pipeline and propose a novel Spherical Gaussian Latent Policy to enable effective exploration in the latent space. Extensive experiments on vision-centric benchmarks show that DLR consistently outperforms strong baselines, including text-only, interleaved multimodal CoT, and latent reasoning methods, while providing superior stepwise interpretability.
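The abstract names a Spherical Gaussian Latent Policy but does not specify its form. As a rough illustration only, a "spherical" (isotropic-covariance) Gaussian policy over continuous latents can be sketched as below: sample a latent around a premise-conditioned mean and score it with its log-density for a policy-gradient update. All function and variable names here are hypothetical, not from the paper.

```python
import numpy as np

def spherical_gaussian_policy(mu, sigma, rng):
    """Sample a latent z ~ N(mu, sigma^2 I) -- an isotropic ("spherical")
    Gaussian -- and return z together with its log-density, the quantity
    a policy-gradient (REINFORCE-style) update would weight by reward.

    mu: premise-conditioned latent mean (hypothetical encoder output)
    sigma: shared scalar standard deviation controlling exploration
    """
    eps = rng.standard_normal(mu.shape)   # reparameterized noise
    z = mu + sigma * eps                  # sampled visual latent
    d = mu.size
    # Log-density of an isotropic Gaussian with scalar std `sigma`
    log_prob = (-0.5 * np.sum(((z - mu) / sigma) ** 2)
                - d * np.log(sigma)
                - 0.5 * d * np.log(2.0 * np.pi))
    return z, log_prob

rng = np.random.default_rng(0)
mu = np.zeros(8)                          # toy 8-dim latent mean
z, logp = spherical_gaussian_policy(mu, 0.1, rng)
```

A larger `sigma` spreads samples farther from the mean, trading off exploration of the latent space against fidelity to the encoder's prediction.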

Mengdan Zhu, Senhao Cheng, Liang Zhao • 2026

Related benchmarks

Task                             Dataset               Metric            Result  Rank
Multimodal Understanding         MMStar                Accuracy          65.2    324
Multimodal Reasoning             MMMU-Pro              Accuracy          56.1    107
Visual Mathematical Reasoning    MathVista (testmini)  Accuracy          67.5    48
Visual Perception and Reasoning  V*                    Overall Accuracy  83.8    18
