
See It, Say It, Sorted: An Iterative Training-Free Framework for Visually-Grounded Multimodal Reasoning in LVLMs

About

Recent large vision-language models (LVLMs) have demonstrated impressive reasoning ability by generating long chain-of-thought (CoT) responses. However, CoT reasoning in multimodal contexts is highly vulnerable to visual hallucination propagation: once an intermediate reasoning step becomes inconsistent with the visual evidence, subsequent steps, even if logically valid, can still lead to incorrect final answers. Existing solutions attempt to mitigate this issue by training models to "think with images" via reinforcement learning (RL). While effective, these methods are costly, model-specific, and difficult to generalize across architectures. In contrast, we present a lightweight method that bypasses RL training and provides an iterative, training-free, plug-and-play framework for visually-grounded multimodal reasoning. Our key idea is to supervise each reasoning step at test time with visual evidence, ensuring that every decoded token is justified by corresponding visual cues. Concretely, we construct a textual visual-evidence pool that guides the model's reasoning generation. When the existing evidence is insufficient, a visual decider module dynamically extracts additional relevant evidence from the image based on the ongoing reasoning context, expanding the pool until the model achieves sufficient visual certainty to terminate reasoning and produce the final answer. Extensive experiments on multiple LVLM backbones and benchmarks demonstrate the effectiveness of our approach. Our method achieves 16.5%-29.5% improvements on TreeBench and 13.7% RH-AUC gains on RH-Bench, substantially reducing hallucination rates while improving reasoning accuracy without additional training.
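The iterative loop the abstract describes (generate a step, check it against the evidence pool, invoke the visual decider when evidence is insufficient, terminate once certainty is reached) can be sketched in pseudocode-style Python. This is an illustrative toy, not the authors' implementation: `image_labels` stands in for the image, keyword membership stands in for the grounding check, and `visually_grounded_cot` is a hypothetical name.

```python
def visually_grounded_cot(image_labels, question_keywords, max_rounds=5):
    """Toy sketch of the training-free loop: iteratively grow a textual
    evidence pool until every reasoning step is visually justified, then
    emit the final answer. All components are simplified stand-ins."""
    evidence_pool = set()
    trace = []
    for _ in range(max_rounds):
        # Grounding check: which claims are not yet backed by evidence?
        missing = [k for k in question_keywords if k not in evidence_pool]
        if not missing:
            # Sufficient visual certainty: terminate and answer.
            trace.append("ANSWER: " + ", ".join(sorted(evidence_pool)))
            return trace
        # Visual decider (stub): extract evidence relevant to the
        # current reasoning context from the image.
        found = [k for k in missing if k in image_labels]
        evidence_pool.update(found)
        trace.append(f"evidence+: {sorted(found)}")
        if not found:
            break  # nothing more extractable; stop rather than hallucinate
    trace.append("UNCERTAIN")
    return trace
```

When the image supports every claim, the loop converges to an answer; when it cannot (e.g. the question mentions an object absent from the image), the loop halts without fabricating a grounded-looking conclusion, which is the failure mode the framework targets.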

Yongchang Zhang, Oliver Ma, Tianyi Liu, Guangquan Zhou, Yang Chen• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Chart Question Answering | ChartQA | Accuracy | 88.3 | 356 |
| Visual Mathematical Reasoning | MathVista | Accuracy | 72.3 | 278 |
| Optical Character Recognition Benchmarking | OCRBench | Accuracy | 90.7 | 131 |
| Visual Grounded Reasoning | TreeBench | Overall Score | 50.9 | 128 |
| Visual Hallucination Evaluation | HallusionBench | Accuracy | 72.5 | 37 |
| Visually Grounded Reasoning | V*Bench | Average Accuracy | 74.9 | 32 |
| Multimodal Reasoning and Perception | RH-Bench | Reasoning Score | 46.4 | 3 |
