
Residual Decoding: Mitigating Hallucinations in Large Vision-Language Models via History-Aware Residual Guidance

About

Large Vision-Language Models (LVLMs) reason effectively over image-text inputs and perform well across a range of multimodal tasks. Despite this success, they are susceptible to language priors and often produce hallucinations: generated content that is grammatically and syntactically coherent yet has no grounding in, or direct relevance to, the actual visual input. To address this problem, we propose Residual Decoding (ResDec), a novel training-free method that exploits historical information to guide decoding. ResDec leverages the internal implicit reasoning mechanism and the token-logits evolution of LVLMs to correct these biases. Extensive experiments demonstrate that ResDec effectively suppresses hallucinations induced by language priors, significantly improves visual grounding, and reduces object hallucinations. Beyond mitigating hallucinations, ResDec also performs strongly on comprehensive LVLM benchmarks, highlighting its broad applicability.
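The paper does not spell out ResDec's update rule here, but the idea of "history-aware residual guidance" over evolving token logits can be sketched as a toy decoding step. Everything below is an assumption for illustration: the function name, the EMA-based history, and the hyperparameters `alpha` and `beta` are hypothetical, not the authors' actual formulation.

```python
import numpy as np

def residual_decode_step(logits, history, alpha=0.5, beta=0.9):
    """Hedged sketch of one history-aware decoding step (not the paper's exact rule).

    logits  : current-step token logits, shape (vocab,)
    history : running EMA of past-step logits, or None at the first step
    alpha   : strength of the residual correction (assumed hyperparameter)
    beta    : EMA decay for the history (assumed hyperparameter)

    Returns (adjusted_logits, updated_history).
    """
    if history is None:
        # First step: nothing to correct against yet.
        return logits, logits.copy()
    # Residual between current logits and their historical trend; amplifying it
    # is one plausible way to counter a static language-prior component that
    # persists across steps, while keeping step-specific visual evidence.
    residual = logits - history
    adjusted = logits + alpha * residual
    # Update the history with an exponential moving average of raw logits.
    new_history = beta * history + (1.0 - beta) * logits
    return adjusted, new_history
```

In a real decoder this step would sit between the model's forward pass and sampling, with the adjusted logits fed to softmax; being training-free, it touches no model weights.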

Xinrong Chen, Xu Chu, Yingmin Qiu, Hengyuan Zhang, Jing Xiong, Shiyu Tang, Shuai Liu, Shaokang Yang, Cheng Yang, Hayden Kwok-Hay So, Ngai Wong • 2026

Related benchmarks

Task                        Dataset                     Metric         Result   Rank
Multimodal Understanding    MM-Vet                      MM-Vet Score   68.7     418
Multimodal Understanding    MMBench                     Accuracy       82.64    367
Science Question Answering  ScienceQA                   Accuracy       90.48    229
Multimodal Understanding    MMStar                      Accuracy       65.47    197
Hallucination Evaluation    CHAIR                       CHAIR_s        47.7     166
Multimodal Understanding    MME                         --             --       158
Visual Perception           MMVP                        Accuracy       63.33    47
Multimodal Understanding    SEEDBench2 Plus             Accuracy       70.31    38
Hallucination Evaluation    POPE Random v1.0 (test)     Accuracy       91.17    31
Hallucination Evaluation    POPE Popular v1.0 (test)    Accuracy       90.34    31
Showing 10 of 13 rows
