
Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding

About

Large Vision-Language Models (LVLMs) have advanced considerably, intertwining visual recognition and language understanding to generate content that is not only coherent but also contextually attuned. Despite their success, LVLMs still suffer from the issue of object hallucinations, where models generate plausible yet incorrect outputs that include objects that do not exist in the images. To mitigate this issue, we introduce Visual Contrastive Decoding (VCD), a simple and training-free method that contrasts output distributions derived from original and distorted visual inputs. The proposed VCD effectively reduces the over-reliance on statistical bias and unimodal priors, two essential causes of object hallucinations. This adjustment ensures the generated content is closely grounded to visual inputs, resulting in contextually accurate outputs. Our experiments show that VCD, without either additional training or the usage of external tools, significantly mitigates the object hallucination issue across different LVLM families. Beyond mitigating object hallucinations, VCD also excels in general LVLM benchmarks, highlighting its wide-ranging applicability.
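The contrastive step described above can be sketched for a single decoding step. This is a minimal illustration, not the authors' implementation: the logit vectors, the Gaussian-noise distortion assumption, and the `alpha`/`beta` hyperparameter names are stand-ins for how the paper describes contrast strength and the adaptive plausibility constraint.

```python
import numpy as np

def vcd_next_token_logits(logits_original, logits_distorted, alpha=1.0, beta=0.1):
    """Sketch of Visual Contrastive Decoding for one decoding step.

    Contrasts the next-token distribution conditioned on the original
    image against the one conditioned on a distorted copy (e.g. the
    image with heavy noise added), boosting tokens that are actually
    grounded in the visual input.
    """
    # Adaptive plausibility constraint: keep only tokens whose
    # probability under the original input is at least beta times
    # the top token's probability.
    probs = np.exp(logits_original - logits_original.max())
    probs /= probs.sum()
    keep = probs >= beta * probs.max()

    # Contrastive adjustment: amplify what the true image supports and
    # penalize what the model predicts even without reliable visuals.
    contrasted = (1 + alpha) * logits_original - alpha * logits_distorted
    contrasted[~keep] = -np.inf  # prune implausible candidates
    return contrasted

# Toy example: token 2 is a "hallucinated" token the model still favors
# when the image is distorted; token 1 is visually grounded.
orig = np.array([1.0, 2.0, 2.5])
dist = np.array([1.0, 0.5, 2.4])
adjusted = vcd_next_token_logits(orig, dist)
print(int(np.argmax(adjusted)))  # greedy choice shifts to the grounded token
```

In this toy case greedy decoding on the raw logits would pick the hallucination-prone token, while the contrasted logits favor the token whose score collapses once the image is distorted, which is the intuition behind VCD.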

Sicong Leng, Hang Zhang, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, Lidong Bing • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | TextVQA | Accuracy | 61.18 | 1117 |
| Visual Question Answering | VizWiz | Accuracy | 45.62 | 1043 |
| Object Hallucination Evaluation | POPE | Accuracy | 85 | 935 |
| Multimodal Evaluation | MME | – | – | 557 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 64.7 | 418 |
| Multimodal Understanding | MMBench | Accuracy | 69.23 | 367 |
| Visual Question Answering | OK-VQA (test) | Accuracy | 65.55 | 296 |
| Multimodal Capability Evaluation | MM-Vet | Score | 47.16 | 282 |
| Video Understanding | MVBench | Accuracy | 61 | 247 |
| Science Question Answering | ScienceQA | Accuracy | 90.08 | 229 |

Showing 10 of 195 rows.
