
Tell Model Where to Look: Mitigating Hallucinations in MLLMs by Vision-Guided Attention

About

Visual attention serves as the primary mechanism through which MLLMs interpret visual information; however, its limited localization capability often leads to hallucinations. We observe that although MLLMs can accurately extract visual semantics from visual tokens, they fail to fully leverage this advantage during subsequent inference. To address this limitation, we propose Vision-Guided Attention (VGA), a training-free method that first constructs precise visual grounding by exploiting the semantic content of visual tokens, and then uses this grounding to guide the model's focus toward relevant visual regions. In image captioning, VGA further refines this guidance dynamically during generation by suppressing regions that have already been described. In VGA, each token undergoes only a single forward pass, introducing a negligible latency overhead of just 4.36%. In addition, VGA is fully compatible with efficient attention implementations such as FlashAttention. Extensive experiments across diverse MLLMs and multiple hallucination benchmarks demonstrate that VGA achieves state-of-the-art dehallucination performance. Further analysis confirms that explicit visual guidance plays a crucial role in enhancing the visual understanding capabilities of MLLMs.
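The core idea, steering attention toward grounded visual regions and suppressing already-described ones, can be illustrated with a minimal sketch. This is not the paper's implementation: the grounding and "described" masks, the bias strengths `alpha` and `beta`, and the toy shapes are all assumptions made for illustration. The sketch shows why an additive logit bias keeps the method compatible with fused-attention kernels that accept an attention bias.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def vision_guided_attention(logits, grounding, alpha=2.0, described=None, beta=2.0):
    """Additively bias attention logits toward grounded visual tokens.

    logits:    (num_queries, num_keys) raw attention scores
    grounding: (num_keys,) 0/1 mask of visually relevant tokens (hypothetical
               output of the grounding step; how it is built is not shown here)
    described: optional (num_keys,) mask of regions already captioned, which
               are down-weighted during generation
    alpha, beta: illustrative bias strengths, not values from the paper
    """
    bias = alpha * grounding.astype(float)
    if described is not None:
        bias -= beta * described.astype(float)
    # An additive bias on the logits can be passed to fused attention kernels
    # as an attention mask/bias, so no second forward pass is needed.
    return softmax(logits + bias, axis=-1)

rng = np.random.default_rng(0)
logits = rng.normal(size=(1, 8))          # one query over 8 visual tokens
grounding = np.array([0, 0, 1, 1, 0, 0, 0, 0])  # tokens 2-3 deemed relevant

attn = vision_guided_attention(logits, grounding)
base = softmax(logits)
# Attention mass shifts toward the grounded tokens (indices 2 and 3).
assert attn[0, 2:4].sum() > base[0, 2:4].sum()
```

Because the guidance is expressed purely as an additive term on the pre-softmax scores, it composes with any attention implementation that accepts a bias tensor, which is consistent with the compatibility claim above.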

Jianfei Zhao, Feng Zhang, Xin Sun, Chong Feng, Zhixing Tan • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Hallucination Assessment | AMBER (test) | CHAIR | 7.36 | 38
Object Hallucination Evaluation | CHAIR (val) | CHAIRs Score | 58.4 | 15
Hallucination Mitigation | SHR | HSR | 22.7 | 15
Hallucination Evaluation | POPE MSCOCO, A-OKVQA, GQA average (Popular) | Accuracy | 87.46 | 15
Hallucination Evaluation | POPE MSCOCO, A-OKVQA, GQA average (Adversarial) | Accuracy | 83.05 | 15
Hallucination Evaluation | POPE MSCOCO, A-OKVQA, GQA average (Random) | Accuracy | 92.82 | 15
