
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models

About

Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
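The abstract identifies three statistical factors behind object hallucination: co-occurrence, decoding uncertainty, and position in the generated text. As a rough illustration only (this is not LURE's actual implementation; the scoring formula, weights, and all names below are hypothetical), one could combine the three factors into a per-object risk score and flag high-scoring objects for revision:

```python
from math import log

def hallucination_risk(objects, cooccur_counts, token_entropy, caption_len):
    """Score each mentioned object by the three factors named in the
    abstract. Illustrative sketch only: the 0.4/0.4/0.2 weights and the
    log/linear combination are hypothetical, not the paper's method.

    objects: list of (object_name, token_index_in_caption) pairs
    cooccur_counts: how often each object co-occurs with others in
        training captions (frequent co-occurrers are riskier)
    token_entropy: decoding entropy when the object token was generated
    caption_len: caption length in tokens (later mentions are riskier)
    """
    scores = {}
    for obj, position in objects:
        co = cooccur_counts.get(obj, 0)
        unc = token_entropy.get(obj, 0.0)
        pos = position / max(caption_len, 1)
        scores[obj] = 0.4 * log(1 + co) + 0.4 * unc + 0.2 * pos
    return scores
```

Objects with high risk scores would then be candidates for the post-hoc rewriting step that produces a less hallucinatory description.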

Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination | POPE (Random) | F1 Score | 89.7 | 200 |
| Object Hallucination | POPE Adversarial | Accuracy | 87 | 196 |
| Object Hallucination | POPE Popular | F1 Score | 87.16 | 188 |
| Visual Hallucination Evaluation | MSCOCO | CHAIR_i | 11.8 | 104 |
| Object Hallucination Evaluation | POPE Random offline | F1 Score | 60.08 | 84 |
| Object Hallucination Evaluation | POPE Popular offline | F1 Score | 58.63 | 84 |
| Object Hallucination Evaluation | POPE Adversarial offline | F1 Score | 58.34 | 84 |
| Image Captioning | MS-COCO 2014 (test) | -- | -- | 43 |
| Hallucination Evaluation | MSCOCO (val) | CHAIR_i | 17.85 | 36 |
| Object Hallucination Mitigation | MSCOCO 2014 (val) | CHAIR Specificity Score | 27.88 | 27 |

Showing 10 of 12 rows
