MIRROR: Multimodal Iterative Reasoning via Reflection on Visual Regions
About
In the era of Vision-Language Models (VLMs), enhancing multimodal reasoning capabilities remains a critical challenge, particularly for ambiguous or complex visual inputs, where initial inferences often lead to hallucinations or logic errors. Existing VLMs often produce plausible yet ungrounded answers, and even when prompted to "reflect", their corrections may remain detached from the image evidence. To address this, we propose MIRROR, a framework for Multimodal Iterative Reasoning via Reflection On visual Regions. By embedding visual reflection as a core mechanism, MIRROR is formulated as a closed-loop process comprising draft, critique, region-based verification, and revision, repeated until the output is visually grounded. To train this model, we construct **ReflectV**, a visual reflective dataset for multi-turn supervision that explicitly contains reflection triggers, region-based verification actions, and answer revisions grounded in visual evidence. Experiments on both general vision-language benchmarks and representative vision-language reasoning benchmarks show that MIRROR improves correctness and reduces visual hallucinations, demonstrating the value of training reflection as an evidence-seeking, region-aware verification process rather than a purely textual revision step.
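The closed loop above can be sketched as pseudocode. This is a minimal, hypothetical illustration, not the authors' implementation: all VLM calls (`draft`, `critique`, `locate`, `verify`, `revise`) are stand-in names, and the toy model below hard-codes one correction round purely to show the control flow.

```python
def mirror_loop(vlm, image, question, max_iters=3):
    """Sketch of the MIRROR closed loop: draft -> critique ->
    region-based verification -> revision, until no doubt remains."""
    answer = vlm.draft(image, question)                # initial draft answer
    for _ in range(max_iters):
        doubt = vlm.critique(image, question, answer)  # reflection trigger
        if doubt is None:                              # no doubt: accept answer
            break
        region = vlm.locate(image, doubt)              # pick a region to inspect
        evidence = vlm.verify(image, region)           # region-based verification
        answer = vlm.revise(answer, evidence)          # revise on visual evidence
    return answer

# Toy stand-in VLM (illustrative only): the first draft hallucinates "a dog";
# one round of region verification corrects it to "a cat".
class ToyVLM:
    def draft(self, image, question):
        return "a dog"
    def critique(self, image, question, answer):
        return None if answer == "a cat" else "is it really a dog?"
    def locate(self, image, doubt):
        return (10, 10, 64, 64)  # hypothetical bounding box
    def verify(self, image, region):
        return "the region shows a cat"
    def revise(self, answer, evidence):
        return "a cat"

print(mirror_loop(ToyVLM(), image=None, question="What animal is shown?"))
# -> a cat
```

The key design point the sketch captures is that revision is conditioned on evidence fetched from a specific image region, rather than on the model's own text alone.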
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy | 94.42 | 935 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 66.7 | 418 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score | 66.7 | 281 |
| Multimodal Understanding | MMStar | Accuracy | 73.33 | 197 |
| Hallucination Evaluation | POPE | Accuracy | 94.42 | 132 |
| Hallucination Evaluation | HallusionBench | -- | -- | 93 |
| Optical Character Recognition | OCRBench | OCRBench Score | 92 | 83 |
| Multimodal Understanding | SEEDBench2 Plus | Accuracy | 76.86 | 38 |
| Mathematical Reasoning | MathVision | Accuracy | 28.29 | 38 |
| Fine-grained Visual Understanding | HR-Bench-4K | Score | 72.88 | 24 |