
Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs

About

Existing Large Vision-Language Models (LVLMs) primarily align the image features of a vision encoder with Large Language Models (LLMs) to leverage their superior text generation capabilities. However, the scale disparity between the vision encoder and the language model may lead the LLM to assume a predominant role in multi-modal comprehension. This imbalance in LVLMs may result in hallucination. Concretely, LVLMs may generate consistent descriptions with or without visual input, indicating that certain outputs are influenced solely by the context text. We refer to this phenomenon as "text inertia." To counteract this issue, we introduce a training-free algorithm to find an equilibrium point between image comprehension and language inference. Specifically, we adaptively adjust and amplify the attention weights assigned to image tokens, thereby granting greater prominence to visual elements. Meanwhile, we subtract the logits of multi-modal inputs from those of pure text input, which helps keep the LVLM from being biased towards the LLM. By enhancing image tokens and reducing the stubborn output of the LLM, we make the LVLM pay more attention to images, alleviating text inertia and reducing hallucination in LVLMs. Our extensive experiments show that this method substantially reduces the frequency of hallucinatory outputs across various LVLMs under different metrics. The project page is available at https://lalbj.github.io/projects/PAI/.
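The two components described above can be sketched in a few lines. This is a minimal, illustrative sketch only: the function names, the scalar hyperparameters `alpha` and `gamma`, and the exact form of the logit contrast are assumptions for illustration; the paper's actual formulation may differ in detail.

```python
def amplify_image_attention(attn_weights, image_positions, alpha=0.5):
    """Scale the attention a query assigns to image tokens by (1 + alpha),
    then renormalize so the weights still sum to 1.
    attn_weights: list of floats (one attention row) summing to 1.
    image_positions: set of indices that correspond to image tokens.
    NOTE: `alpha` is an illustrative hyperparameter, not a value from the paper."""
    scaled = [w * (1 + alpha) if i in image_positions else w
              for i, w in enumerate(attn_weights)]
    total = sum(scaled)
    return [w / total for w in scaled]


def debias_logits(multimodal_logits, text_only_logits, gamma=1.1):
    """Contrast logits from the multi-modal forward pass against logits from a
    pure-text forward pass, suppressing tokens driven only by 'text inertia'.
    The (gamma, gamma - 1) weighting is one common contrastive-decoding style;
    the paper's exact coefficients are an assumption here."""
    return [gamma * m - (gamma - 1) * t
            for m, t in zip(multimodal_logits, text_only_logits)]
```

In practice the attention rescaling would be applied inside each decoder layer before the softmax-normalized weights are consumed, and the logit contrast would be applied once per decoding step before sampling.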

Shi Liu, Kecheng Zheng, Wei Chen• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | -- | -- | 935 |
| Object Hallucination | POPE (Random) | F1 Score | 85.64 | 200 |
| Object Hallucination | POPE Adversarial | Accuracy | 82.83 | 196 |
| Object Hallucination | POPE Popular | F1 Score | 83.46 | 188 |
| Hallucination Evaluation | MMHal-Bench | MMHal Score | 1.78 | 174 |
| Hallucination Evaluation | CHAIR | CHAIR_s | 54.2 | 166 |
| Hallucination Evaluation | POPE | Accuracy | 85.84 | 132 |
| Vision Understanding | MMBench | Accuracy | 65.45 | 104 |
| Visual Understanding | MM-Vet | MM-Vet Score | 27.99 | 102 |
| Document Visual Question Answering | DocVQA | Accuracy | 21.98 | 81 |

Showing 10 of 50 rows.
