
MIRROR: Multimodal Iterative Reasoning via Reflection on Visual Regions

About

In the era of Vision-Language Models (VLMs), enhancing multimodal reasoning capabilities remains a critical challenge, particularly in handling ambiguous or complex visual inputs, where initial inferences often lead to hallucinations or logic errors. Existing VLMs often produce plausible yet ungrounded answers, and even when prompted to "reflect", their corrections may remain detached from the image evidence. To address this, we propose the MIRROR framework for Multimodal Iterative Reasoning via Reflection On visual Regions. By embedding visual reflection as a core mechanism, MIRROR is formulated as a closed-loop process comprising draft, critique, region-based verification, and revision, which are repeated until the output is visually grounded. To facilitate training of this model, we construct **ReflectV**, a visual reflective dataset for multi-turn supervision that explicitly contains reflection triggers, region-based verification actions, and answer revision grounded in visual evidence. Experiments on both general vision-language benchmarks and representative vision-language reasoning benchmarks show that MIRROR improves correctness and reduces visual hallucinations, demonstrating the value of training reflection as an evidence-seeking, region-aware verification process rather than a purely textual revision step.
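The closed-loop process described above (draft, critique, region-based verification, revision, repeated until the output is visually grounded) can be sketched as a simple control loop. This is a hypothetical illustration only: the model interface (`draft`, `critique`, `verify_region`, `revise`) and the stub VLM are placeholder names invented for this sketch, not the authors' actual implementation or API.

```python
# Minimal sketch of MIRROR's draft -> critique -> verify -> revise loop.
# All method names are illustrative placeholders, not the real MIRROR API.

def mirror_loop(model, image, question, max_iters=4):
    """Repeat reflection until the critique finds no ungrounded claims."""
    answer = model.draft(image, question)                 # initial draft
    for _ in range(max_iters):
        issues = model.critique(image, question, answer)  # reflection trigger
        if not issues:                                    # visually grounded
            break
        # Re-inspect the image region tied to each flagged claim.
        evidence = [model.verify_region(image, iss) for iss in issues]
        answer = model.revise(answer, evidence)           # grounded revision
    return answer


class StubVLM:
    """Toy stand-in: the draft hallucinates an object; one round of
    region-based verification corrects it."""
    def draft(self, image, question):
        return "a red ball"                  # hallucinated draft
    def critique(self, image, question, answer):
        return ["red ball"] if "red ball" in answer else []
    def verify_region(self, image, claim):
        return "region shows a blue cube"    # visual evidence
    def revise(self, answer, evidence):
        return "a blue cube"


print(mirror_loop(StubVLM(), image=None, question="What is in the image?"))
# -> a blue cube
```

The key point the sketch captures is that revision is driven by region-level visual evidence gathered after the critique, rather than by a purely textual self-correction step.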

Haoyu Zhang, Yuwei Wu, Pengxiang Li, Xintong Zhang, Zhi Gao, Rui Gao, Mingyang Gao, Che Sun, Yunde Jia • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | Accuracy: 94.42 | 1455 |
| Multimodal Understanding | MM-Vet | MM-Vet Score: 66.7 | 531 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score: 66.7 | 431 |
| Multimodal Understanding | MMStar | Accuracy: 73.33 | 324 |
| Optical Character Recognition | OCRBench | -- | 232 |
| Hallucination Evaluation | POPE | Accuracy: 94.42 | 153 |
| Mathematical Reasoning | MathVision | Accuracy: 28.29 | 144 |
| Hallucination Evaluation | HallusionBench | -- | 108 |
| Multimodal Understanding | SEEDBench2 Plus | Accuracy: 76.86 | 74 |
| OCR-related Understanding Tasks | TextVQA (val) | Accuracy: 86.62 | 57 |

Showing 10 of 17 rows.
