
Learning When to Look: A Disentangled Curriculum for Strategic Perception in Multimodal Reasoning

About

Multimodal Large Language Models (MLLMs) demonstrate significant potential but remain brittle in complex, long-chain visual reasoning tasks. A critical failure mode is "visual forgetting", where models progressively lose visual grounding as reasoning extends, a phenomenon aptly described as "think longer, see less". We posit this failure stems from current training paradigms prematurely entangling two distinct cognitive skills: (1) abstract logical reasoning ("how-to-think") and (2) strategic visual perception ("when-to-look"). This entanglement creates a foundational cold-start deficiency that weakens abstract reasoning, and a strategic perception deficit, as models lack a policy for when to perceive. In this paper, we propose a novel curriculum-based framework to disentangle these skills. First, we introduce a disentangled Supervised Fine-Tuning (SFT) curriculum that builds a robust abstract reasoning backbone on text-only data before anchoring it to vision with a novel Perception-Grounded Chain-of-Thought (PG-CoT) paradigm. Second, we resolve the strategic perception deficit by formulating perception timing as a reinforcement learning problem. We design a Pivotal Perception Reward that teaches the model when to look by coupling perceptual actions to linguistic markers of cognitive uncertainty (e.g., "wait", "verify"), thereby learning an autonomous grounding policy. Our contributions include the formalization of these two deficiencies and the development of a principled, two-stage framework to address them, transforming the model from a heuristic-driven observer into a strategic, grounded reasoner. Code: https://github.com/gaozilve-max/learning-when-to-look
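The abstract's core RL idea, rewarding perception actions that coincide with expressed cognitive uncertainty, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the `<look>` perception token, the marker list, the context window, and the reward values are not specified by the paper; the actual reward is defined in the authors' code.

```python
import re

# Illustrative sketch of a "Pivotal Perception Reward": reward perception
# actions that immediately follow linguistic markers of cognitive
# uncertainty ("wait", "verify", ...), and penalize ungrounded looks.
# Token name, marker set, window size, and reward magnitudes are all
# hypothetical choices made for this sketch.

UNCERTAINTY_MARKERS = {"wait", "verify", "let me check", "hmm"}
PERCEPTION_TOKEN = "<look>"

def pivotal_perception_reward(trace: str, window: int = 40) -> float:
    """Score a reasoning trace: +1.0 for each perception token preceded
    (within `window` characters) by an uncertainty marker, -0.5 for each
    perception token with no such marker in its preceding context."""
    reward = 0.0
    for match in re.finditer(re.escape(PERCEPTION_TOKEN), trace):
        context = trace[max(0, match.start() - window):match.start()].lower()
        if any(marker in context for marker in UNCERTAINTY_MARKERS):
            reward += 1.0   # pivotal look: tied to expressed uncertainty
        else:
            reward -= 0.5   # heuristic look: unmotivated perception
    return reward

# A look triggered by "Wait" is rewarded; an unprompted look is penalized.
print(pivotal_perception_reward("The sum seems off. Wait, <look> the diagram shows 3."))
print(pivotal_perception_reward("<look> The diagram shows 3."))
```

The design intent, as the abstract frames it, is that the policy learns to perceive exactly when its own chain of thought signals doubt, rather than looking on a fixed schedule.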

Siqi Yang, Zilve Gao, Haibo Qiu, Fanfan Liu, Peng Shi, Zhixiong Zeng, Qingmin Liao, Lin Ma • 2025

Related benchmarks

Task | Dataset | Result | Rank
Multimodal Reasoning | MM-Vet | MM-Vet Score 73.5 | 281
Multi-discipline Multimodal Understanding | MMMU | -- | 266
Multimodal Perception and Cognition | MME | -- | 103
Multimodal Understanding | MMBench (MMB) | -- | 69
Mathematical Multimodal Reasoning | MathVista | Accuracy 73.4 | 46
Multimodal Math Reasoning | MathVision | Accuracy 44.2 | 31
Multimodal Math Reasoning | WeMath | Accuracy 54.4 | 26
Multimodal Mathematical Reasoning | MathVerse-V | Accuracy 53.3 | 17
Multimodal Mathematical Reasoning | LogicVista | Accuracy 49.9 | 15
Multimodal Hallucination Evaluation | HallusionBench | Hallusination Score 68.7 | 14

(Showing 10 of 12 rows)
