
Evaluating Object Hallucination in Large Vision-Language Models

About

Inspired by the superior language abilities of large language models (LLMs), large vision-language models (LVLMs) have recently been explored that integrate powerful LLMs to improve performance on complex multimodal tasks. Despite the promising progress of LVLMs, we find that they suffer from the hallucination problem, i.e., they tend to generate descriptions containing objects that are inconsistent with the target images. To investigate this, this work presents the first systematic study of object hallucination in LVLMs. We conduct evaluation experiments on several representative LVLMs and show that they mostly suffer from severe object hallucination. We further discuss how the visual instructions may influence hallucination, and find that objects that frequently occur in the visual instructions, or that co-occur with the image objects, are clearly more prone to be hallucinated by LVLMs. Besides, we find that existing evaluation methods can be affected by the input instructions and generation styles of LVLMs. Thus, we further design an improved evaluation method for object hallucination by proposing a polling-based query method called POPE. Experimental results demonstrate that POPE can evaluate object hallucination in a more stable and flexible way. Our code and data are publicly available at https://github.com/RUCAIBox/POPE.
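The abstract describes POPE as a polling-based query method: rather than scoring free-form captions, the model is asked a series of yes/no questions ("Is there a <object> in the image?") about ground-truth objects and sampled negative objects, and the answers are scored with accuracy and F1. A minimal sketch of this idea (function names and question phrasing are illustrative, not the official POPE implementation):

```python
def build_pope_questions(objects, negatives):
    """Pair ground-truth objects (label 'yes') with sampled negative
    objects (label 'no') into yes/no polling questions.

    POPE's variants differ in how negatives are sampled: random,
    popular (frequent objects), or adversarial (frequent co-occurrers).
    """
    questions = []
    for obj in objects:
        questions.append((f"Is there a {obj} in the image?", "yes"))
    for obj in negatives:
        questions.append((f"Is there a {obj} in the image?", "no"))
    return questions


def score_answers(pairs):
    """Score (prediction, label) pairs, treating 'yes' as the positive
    class, and return accuracy, precision, recall, and F1."""
    tp = sum(1 for p, l in pairs if p == "yes" and l == "yes")
    fp = sum(1 for p, l in pairs if p == "yes" and l == "no")
    fn = sum(1 for p, l in pairs if p == "no" and l == "yes")
    tn = sum(1 for p, l in pairs if p == "no" and l == "no")
    acc = (tp + tn) / len(pairs)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}
```

Because each probe is a closed yes/no question, the score does not depend on the model's caption length or generation style, which is the stability the abstract claims over caption-matching metrics.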

Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, Ji-Rong Wen • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Object Hallucination Evaluation | POPE | – | 1455 |
| Visual Question Answering | VQA v2 | Accuracy 76.1 | 1362 |
| Visual Question Answering | TextVQA | Accuracy 54.8 | 1285 |
| Multimodal Capability Evaluation | MM-Vet | Score 28.5 | 345 |
| Object Hallucination | POPE Adversarial | Accuracy 65.17 | 288 |
| Object Hallucination | POPE (Random) | F1 Score 80.17 | 285 |
| Object Hallucination | POPE Popular | F1 Score 73.02 | 273 |
| Hallucination Evaluation | MMHal-Bench | MMHal Score 1.64 | 216 |
| Visual Question Answering | GQA | Mean Accuracy 60.2 | 196 |
| Visual Question Answering | GQA | Score 44.5 | 193 |

Showing 10 of 23 rows
