
Residual Decoding: Mitigating Hallucinations in Large Vision-Language Models via History-Aware Residual Guidance

About

Large Vision-Language Models (LVLMs) can reason over image-text inputs and perform well across a wide range of multimodal tasks. Despite this success, they are susceptible to language priors and often produce hallucinations: generated content that is grammatically and syntactically coherent yet has no match or direct relevance to the visual input. To address this problem, we propose Residual Decoding (ResDec), a novel training-free method that exploits historical decoding information to guide generation. ResDec leverages the internal implicit reasoning mechanism of LVLMs and the evolution of token logits across decoding steps to correct prior-induced biases. Extensive experiments demonstrate that ResDec effectively suppresses hallucinations induced by language priors, significantly improves visual grounding, and reduces object hallucinations. Beyond mitigating hallucinations, ResDec also performs strongly on comprehensive LVLM benchmarks, highlighting its broad applicability.
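The abstract does not give ResDec's exact update rule, but the idea of "history-aware residual guidance" over evolving token logits can be sketched as follows. This is a minimal illustration under assumptions: it treats the residual between the current step's logits and the mean of earlier steps' logits as the "evolution" signal and pushes the distribution along that direction; the function name, the averaging scheme, and the strength parameter `alpha` are all hypothetical, not the paper's formulation.

```python
import numpy as np

def residual_guided_logits(logits_history, alpha=0.5):
    """Hypothetical sketch of history-aware residual guidance.

    logits_history: list of 1-D logit arrays from successive decoding
    steps, with the current step's logits last. The residual between
    the current logits and the mean of earlier ones captures how each
    token's evidence has evolved; adding a scaled residual amplifies
    tokens whose support grew over time (assumed formulation).
    """
    current = np.asarray(logits_history[-1], dtype=float)
    if len(logits_history) == 1:
        # No history yet: nothing to correct against.
        return current
    history_mean = np.mean(logits_history[:-1], axis=0)
    residual = current - history_mean       # logits-evolution signal
    return current + alpha * residual       # push along the evolution direction

# Toy usage: two vocabulary entries over two decoding steps.
adjusted = residual_guided_logits([np.array([0.0, 0.0]),
                                   np.array([1.0, 2.0])])
```
In a real decoder this adjustment would be applied before sampling or argmax at each step, e.g. inside a logits-processing hook of the generation loop.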

Xinrong Chen, Xu Chu, Yingmin Qiu, Hengyuan Zhang, Jing Xiong, Shiyu Tang, Shuai Liu, Shaokang Yang, Cheng Yang, Hayden Kwok-Hay So, Ngai Wong • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multimodal Understanding | MMBench | Accuracy 82.64 | 637 |
| Multimodal Understanding | MM-Vet | MM-Vet Score 68.7 | 531 |
| Science Question Answering | ScienceQA | Accuracy 90.48 | 502 |
| Multimodal Understanding | MMStar | Accuracy 65.47 | 324 |
| Hallucination Evaluation | CHAIR | CHAIR_s 47.7 | 252 |
| Multimodal Understanding | MME | -- | 207 |
| Visual Perception | MMVP | Accuracy 63.33 | 82 |
| Multimodal Understanding | SEEDBench2 Plus | Accuracy 70.31 | 74 |
| Multimodal Understanding | LLaVA-Bench | Overall Score 91.9 | 72 |
| Hallucination Evaluation | HallusionBench | Answer Accuracy (aAcc) 71.6 | 39 |

Showing 10 of 13 rows.
