
Self-Introspective Decoding: Alleviating Hallucinations for Large Vision-Language Models

About

While Large Vision-Language Models (LVLMs) have rapidly advanced in recent years, the prevalent issue known as the "hallucination" problem has emerged as a significant bottleneck, hindering their real-world deployment. Existing methods mitigate this issue mainly from two perspectives. One approach leverages extra knowledge, such as robustly instruction-tuning LVLMs with curated datasets or employing auxiliary analysis networks, which inevitably incurs additional costs. Another approach, known as contrastive decoding, induces hallucinations by manually disturbing the raw vision or instruction inputs and mitigates them by contrasting the outputs of the disturbed and original LVLMs. However, these approaches rely on empirical, holistic input disturbances and double the inference cost. To avoid these issues, we propose a simple yet effective method named Self-Introspective Decoding (SID). Our empirical investigation reveals that pretrained LVLMs can introspectively assess the importance of vision tokens based on the preceding vision and text (both instruction and generated) tokens. We develop the Context and Text-aware Token Selection (CT2S) strategy, which preserves only unimportant vision tokens after the early layers of LVLMs to adaptively amplify text-informed hallucinations during auto-regressive decoding. This approach ensures that the multimodal knowledge absorbed in the early layers induces multimodal contextual hallucinations rather than aimless ones. Subsequently, the amplified vision-and-text-associated hallucination logits are subtracted from the original token logits, guiding LVLM decoding faithfully. Extensive experiments illustrate that SID generates less hallucinatory, higher-quality text across various metrics, without extra knowledge or much additional computational burden.
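The logit-subtraction step described above follows the standard contrastive-decoding form: the distribution from the hallucination-amplified branch is subtracted from the original distribution, down-weighting tokens that the degraded branch prefers. The sketch below illustrates this adjustment on toy logits; the function names, the `alpha` weighting, and the toy values are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def contrastive_decode(orig_logits, amplified_logits, alpha=1.0):
    """Contrast original logits against hallucination-amplified logits.

    Standard contrastive-decoding form: tokens favored by the amplified
    (hallucination-prone) branch are penalized in the final distribution.
    """
    orig = np.asarray(orig_logits, dtype=float)
    amp = np.asarray(amplified_logits, dtype=float)
    return (1.0 + alpha) * orig - alpha * amp

def softmax(x):
    z = x - np.max(x)            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy vocabulary of 4 tokens; token 2 stands in for a hallucinated object
# that the amplified branch strongly prefers.
orig = np.array([1.5, 1.0, 2.0, 0.5])
amp = np.array([1.0, 0.5, 3.5, 0.2])

adjusted = contrastive_decode(orig, amp, alpha=1.0)
print(np.argmax(softmax(orig)))      # top token before contrasting
print(np.argmax(softmax(adjusted)))  # hallucinated token is suppressed
```

In this toy example the hallucinated token is the greedy choice under the original logits but loses its lead once the amplified branch's preference for it is subtracted out.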

Fushuo Huo, Wenchao Xu, Zhong Zhang, Haozhao Wang, Zhicheng Chen, Peilin Zhao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VizWiz | Accuracy | 50.9 | 1043 |
| Object Hallucination Evaluation | POPE | Accuracy | 85.8 | 935 |
| Multimodal Evaluation | MME | -- | -- | 557 |
| Video Understanding | MVBench | Accuracy | 65 | 247 |
| Visual Mathematical Reasoning | MathVista | Accuracy | 54.3 | 189 |
| Visual Hallucination Evaluation | MSCOCO | CHAIR_i | 11.85 | 104 |
| Object Hallucination Evaluation | POPE Random offline | F1 Score | 72.84 | 84 |
| Object Hallucination Evaluation | POPE Popular offline | F1 Score | 81.9 | 84 |
| Object Hallucination Evaluation | POPE Adversarial offline | F1 Score | 68.3 | 84 |
| Object Hallucination Evaluation | MS-COCO (POPE Adversarial) | Accuracy | 84 | 80 |

Showing 10 of 27 rows.
