
Learning When to Look: A Disentangled Curriculum for Strategic Perception in Multimodal Reasoning

About

Multimodal Large Language Models (MLLMs) demonstrate significant potential but remain brittle in complex, long-chain visual reasoning tasks. A critical failure mode is "visual forgetting", where models progressively lose visual grounding as reasoning extends, a phenomenon aptly described as "think longer, see less". We posit this failure stems from current training paradigms prematurely entangling two distinct cognitive skills: (1) abstract logical reasoning ("how-to-think") and (2) strategic visual perception ("when-to-look"). This entanglement creates both a foundational cold-start deficiency, which weakens abstract reasoning, and a strategic perception deficit, as models lack a policy for when to perceive. In this paper, we propose a novel curriculum-based framework to disentangle these skills. First, we introduce a disentangled Supervised Fine-Tuning (SFT) curriculum that builds a robust abstract reasoning backbone on text-only data before anchoring it to vision with a novel Perception-Grounded Chain-of-Thought (PG-CoT) paradigm. Second, we resolve the strategic perception deficit by formulating perception timing as a reinforcement learning problem. We design a Pivotal Perception Reward that teaches the model when to look by coupling perceptual actions to linguistic markers of cognitive uncertainty (e.g., "wait", "verify"), thereby learning an autonomous grounding policy. Our contributions include the formalization of these two deficiencies and the development of a principled, two-stage framework to address them, transforming the model from a heuristic-driven observer into a strategic, grounded reasoner. Code: https://github.com/gaozilve-max/learning-when-to-look
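The most concrete mechanism in the abstract is the Pivotal Perception Reward, which couples perceptual actions to linguistic markers of cognitive uncertainty. Below is a minimal, hypothetical Python sketch of how such a reward might score a reasoning trace. The marker list, the `<look>...</look>` action convention, the character window, and the bonus/penalty weights are all illustrative assumptions, not the paper's actual implementation.

```python
import re

# Hypothetical sketch of a Pivotal Perception Reward in the spirit of the
# abstract. The marker phrases, <look>...</look> action tags, window size,
# and weights below are assumptions for illustration only.

# Linguistic markers of cognitive uncertainty that should trigger a look.
UNCERTAINTY_MARKERS = re.compile(
    r"\b(wait|verify|let me check|look again)\b", re.IGNORECASE
)

# Assumed convention: the model emits perceptual actions as <look>...</look>.
PERCEPTION_ACTION = re.compile(r"<look>.*?</look>", re.DOTALL)


def pivotal_perception_reward(trace: str, window: int = 80,
                              bonus: float = 1.0, penalty: float = 0.25) -> float:
    """Reward perceptual actions that immediately follow uncertainty markers.

    For each uncertainty marker, grant `bonus` if a perception action starts
    within `window` characters after it (the model "looked" when unsure).
    Perception actions not motivated by any marker are penalized as
    unstrategic looks.
    """
    reward = 0.0
    marker_ends = [m.end() for m in UNCERTAINTY_MARKERS.finditer(trace)]
    action_starts = [m.start() for m in PERCEPTION_ACTION.finditer(trace)]

    matched_actions = set()
    for end in marker_ends:
        # First not-yet-matched perception action inside the post-marker window.
        hit = next((s for s in action_starts
                    if end <= s <= end + window and s not in matched_actions),
                   None)
        if hit is not None:
            matched_actions.add(hit)
            reward += bonus

    # Penalize looks that no uncertainty marker motivated.
    reward -= penalty * (len(action_starts) - len(matched_actions))
    return reward


if __name__ == "__main__":
    trace = ("The left object seems larger. Wait, <look>re-examine the two "
             "objects</look> ... the right one is larger. Verify: <look>check "
             "the scale bar</look>. Final answer: right.")
    print(pivotal_perception_reward(trace))  # 2.0: both looks follow markers
```

The design point this sketch tries to capture is that the reward attaches to the timing of a look (immediately after an expression of doubt) rather than to the raw number of looks, which is what would push a policy toward strategic rather than heuristic perception.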

Siqi Yang, Zilve Gao, Haibo Qiu, Fanfan Liu, Peng Shi, Zhixiong Zeng, Qingmin Liao, Lin Ma • 2025

Related benchmarks

Task | Dataset | Result | Rank
Multimodal Reasoning | MM-Vet | MM-Vet Score: 73.5 | 431
Multi-discipline Multimodal Understanding | MMMU | -- | 317
Mathematical Multimodal Reasoning | MathVista | Accuracy: 73.4 | 218
Multimodal Math Reasoning | MathVision | Accuracy: 44.2 | 183
Multimodal Perception and Cognition | MME | -- | 182
Multimodal Math Reasoning | WeMath | Accuracy: 54.4 | 168
Multimodal Understanding | MMBench (MMB) | -- | 141
Multimodal Mathematical Reasoning | LogicVista | Accuracy: 49.9 | 34
Multimodal Mathematical Reasoning | MathVerse-V | Accuracy: 53.3 | 33
Multimodal Hallucination Evaluation | HallusionBench | Hallucination Score: 68.7 | 22
Showing 10 of 12 rows
