
Self-Introspective Decoding: Alleviating Hallucinations for Large Vision-Language Models

About

While Large Vision-Language Models (LVLMs) have rapidly advanced in recent years, the prevalent issue known as the `hallucination' problem has emerged as a significant bottleneck, hindering their real-world deployment. Existing methods mitigate this issue mainly from two perspectives: one approach leverages extra knowledge, such as robust instruction tuning of LVLMs with curated datasets or employing auxiliary analysis networks, which inevitably incurs additional costs. Another approach, known as contrastive decoding, induces hallucinations by manually disturbing the raw vision or instruction inputs and mitigates them by contrasting the outputs of the disturbed and original LVLMs. However, these approaches rely on empirical, holistic input disturbances and double the inference cost. To avoid these issues, we propose a simple yet effective method named Self-Introspective Decoding (SID). Our empirical investigation reveals that pretrained LVLMs can introspectively assess the importance of vision tokens based on preceding vision and text (both instruction and generated) tokens. We develop the Context and Text-aware Token Selection (CT2S) strategy, which preserves only unimportant vision tokens after the early layers of LVLMs to adaptively amplify text-informed hallucination during auto-regressive decoding. This approach ensures that the multimodal knowledge absorbed in the early layers induces multimodal contextual rather than aimless hallucinations. Subsequently, the amplified vision-and-text association hallucinations are subtracted from the original token logits, guiding LVLM decoding faithfully. Extensive experiments illustrate that SID generates text with fewer hallucinations and higher quality across various metrics, without extra knowledge or significant additional computational burden.
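The two core steps of the abstract can be sketched in code: (1) keep only the least-important vision tokens to amplify hallucination, and (2) subtract the amplified-hallucination logits from the original logits at each decoding step. This is a minimal illustrative sketch, not the authors' implementation; the function names, the `keep_ratio`, `alpha`, and `beta` parameters, and the adaptive-plausibility masking are assumptions modeled on common contrastive-decoding formulations.

```python
import numpy as np

def select_unimportant_tokens(attn_scores, keep_ratio=0.25):
    """CT2S-style selection sketch (hypothetical): keep the vision tokens
    with the LOWEST importance scores from an early layer, so the model's
    text-informed hallucination is amplified in the disturbed pass."""
    k = max(1, int(len(attn_scores) * keep_ratio))
    # Indices of the k least-attended vision tokens (ascending sort).
    return np.argsort(attn_scores)[:k]

def contrastive_decode(original_logits, disturbed_logits, alpha=1.0, beta=0.1):
    """One contrastive decoding step (hypothetical parameters).

    original_logits:  next-token logits from the full LVLM pass.
    disturbed_logits: logits from the pass that kept only unimportant
                      vision tokens (amplified hallucination).
    alpha: strength of the hallucination subtraction.
    beta:  adaptive-plausibility cutoff relative to the top token.
    """
    # Subtract the amplified-hallucination logits from the original ones.
    contrast = (1 + alpha) * original_logits - alpha * disturbed_logits
    # Plausibility constraint: only consider tokens whose original
    # probability is within a beta-fraction of the best token's.
    probs = np.exp(original_logits - original_logits.max())
    probs /= probs.sum()
    mask = probs >= beta * probs.max()
    contrast = np.where(mask, contrast, -np.inf)
    return int(np.argmax(contrast))
```

In a real decoder, both passes would run on the same LVLM, so the extra cost comes from the second (shortened) forward pass rather than from a separate model.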

Fushuo Huo, Wenchao Xu, Zhong Zhang, Haozhao Wang, Zhicheng Chen, Peilin Zhao• 2024

Related benchmarks

Task                            | Dataset                     | Result         | Rank
--------------------------------|-----------------------------|----------------|-----
Visual Question Answering       | VizWiz                      | Accuracy 50.9  | 1525
Object Hallucination Evaluation | POPE                        | Accuracy 85.8  | 1455
Multimodal Evaluation           | MME                         | --             | 658
Video Understanding             | MVBench                     | Accuracy 65    | 425
Object Hallucination            | POPE Adversarial            | Accuracy 87.4  | 288
Visual Mathematical Reasoning   | MathVista                   | Accuracy 54.3  | 278
Hallucination Evaluation        | AMBER                       | CHAIR 6.7      | 172
Object Hallucination Evaluation | MS-COCO (POPE Adversarial)  | Accuracy 84    | 138
Object Hallucination Evaluation | CHAIR                       | CS Score 44.2  | 108
Visual Hallucination Evaluation | MSCOCO                      | CHAIR_i 11.85  | 104

Showing 10 of 51 rows
