
Reducing Hallucinations in Vision-Language Models via Latent Space Steering

About

Hallucination poses a challenge to the deployment of large vision-language models (LVLMs) in real applications. Unlike in large language models (LLMs), hallucination in LVLMs often arises from misalignment between visual inputs and textual outputs. This paper investigates the underlying mechanisms of such hallucination, focusing on the structural features that distinguish LVLMs from LLMs. We find that hallucinations are driven largely by the sensitivity of text decoders to vision inputs, a natural consequence of image encoders and text decoders being pre-trained separately. Motivated by this, we introduce Visual and Textual Intervention (VTI), a novel technique that reduces hallucinations by steering latent-space representations during inference to stabilize vision features. As a task-agnostic test-time intervention, VTI can be applied to any problem without extra cost. Extensive experiments show that it effectively reduces hallucinations and outperforms baseline methods across multiple metrics, highlighting the critical role of vision-feature stability in LVLMs.
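The core idea of latent-space steering, as described in the abstract, is to pre-compute a direction in feature space and add it to hidden states at inference time so that vision features stay stable. The toy sketch below illustrates that mechanism only; the function names (`steering_direction`, `apply_steering`) and the single mean-difference direction are illustrative assumptions, not the paper's exact procedure, which derives per-layer intervention directions from contrasting clean and perturbed inputs.

```python
import numpy as np

def steering_direction(stable_feats: np.ndarray, noisy_feats: np.ndarray) -> np.ndarray:
    """Unit vector pointing from the mean of features under perturbed images
    toward the mean of features under clean images (shape: [n, d] -> [d])."""
    delta = stable_feats.mean(axis=0) - noisy_feats.mean(axis=0)
    return delta / (np.linalg.norm(delta) + 1e-8)

def apply_steering(hidden: np.ndarray, direction: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Shift hidden states along the pre-computed direction at inference time.
    alpha controls intervention strength; no model weights are updated."""
    return hidden + alpha * direction

# Toy demo: clean-image features cluster near +1, perturbed ones near -1.
rng = np.random.default_rng(0)
stable = 1.0 + 0.1 * rng.standard_normal((32, 8))
noisy = -1.0 + 0.1 * rng.standard_normal((32, 8))

d = steering_direction(stable, noisy)
steered = apply_steering(noisy, d, alpha=1.0)  # nudged toward the stable cluster
```

Because the direction is computed once offline and applied as a simple addition during the forward pass, this kind of intervention is test-time only and task-agnostic, matching the abstract's claim of no extra training cost.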

Sheng Liu, Haotian Ye, Lei Xing, James Zou • 2024

Related benchmarks

Task                                         Dataset                      Result              Rank
-------------------------------------------  ---------------------------  ------------------  ----
Object Hallucination Evaluation              POPE                         --                  1455
Multimodal Capability Evaluation             MM-Vet                       Score: 43.44        345
Multi-discipline Multimodal Understanding    MMMU                         --                  317
Object Hallucination                         POPE Adversarial             Accuracy: 86.57     288
Object Hallucination                         POPE (Random)                F1 Score: 88.86     285
Object Hallucination                         POPE Popular                 F1 Score: 88.03     273
Hallucination Evaluation                     CHAIR                        CHAIR_s: 35         252
Hallucination Evaluation                     MMHal-Bench                  MMHal Score: 3.68   216
Object Hallucination Evaluation              MS-COCO (POPE Adversarial)   Accuracy: 83.53     138
Object Hallucination Evaluation              MS-COCO POPE (Popular)       Accuracy: 86.37     108

Showing 10 of 58 rows.
