Watch Closely: Mitigating Object Hallucinations in Large Vision-Language Models with Disentangled Decoding

About

Large Vision-Language Models (LVLMs) bridge the visual and linguistic modalities and show strong potential across many domains. Despite significant progress, however, LVLMs still suffer from severe hallucinations in object recognition: they often misidentify objects and generate text that is fluent but unfaithful to the visual content, which can have serious consequences in real-world applications. Several recent methods have been proposed to alleviate LVLM hallucinations, but most focus solely on the language modality. To mitigate hallucinations in both the language and visual modalities, we introduce Hallucination Disentangled Decoding (HDD), a training-free method. HDD enhances the original image by segmenting it and selecting segments that augment the original, while also using a blank image to remove language-prior hallucinations from both the original and segmented images. This design reduces the model's reliance on language priors and improves its visual grounding. (Code: https://github.com/rickeyhhh/Hallucination-Disentangled-Decoding)
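The decoding idea described above can be sketched as a per-step logit combination: fuse the logits produced from the original image and its complementary segments, then contrast the fused logits against a blank-image pass that captures the model's pure language prior. The function below is a minimal illustrative sketch of that scheme, not the paper's published equations; the names `alpha`, `beta`, and the specific fusion rule are assumptions for illustration.

```python
import numpy as np

def disentangled_decode_step(orig_logits, segment_logits_list, blank_logits,
                             alpha=1.0, beta=0.5):
    """One decoding step in the spirit of HDD (illustrative sketch).

    orig_logits:         logits conditioned on the original image
    segment_logits_list: logits conditioned on each selected segment view
    blank_logits:        logits conditioned on a blank image (language prior)
    alpha, beta:         hypothetical mixing weights, not from the paper
    """
    # Visual enhancement: average the logits from the segmented views
    # that complement the original image, then mix with the original.
    seg = np.mean(np.stack(segment_logits_list), axis=0)
    visual = beta * orig_logits + (1.0 - beta) * seg
    # Language-prior removal: contrast against the blank-image pass,
    # which reflects what the model would say without visual evidence.
    return (1.0 + alpha) * visual - alpha * blank_logits

# Toy usage: the prior-favored token (index 1) is suppressed once the
# blank-image logits are contrasted away.
out = disentangled_decode_step(np.array([1.0, 1.2]),
                               [np.array([1.0, 1.2])],
                               np.array([0.0, 3.0]))
print(int(np.argmax(out)))  # → 0
```

In an actual LVLM this combination would be applied to the next-token logits at every generation step, with the segment views precomputed once per image.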

Ruiqi Ma, Yu Yan, Chunhong Zhang, Minghao Yin, XinChao Liu, Zhihong Jin, Zheng Hu • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Object Hallucination Evaluation | MS-COCO (POPE Adversarial) | Accuracy | 84.3 | 80
Object Hallucination Assessment | MSCOCO | CHAIR Instance Score | 6.3 | 38
Object Hallucination Evaluation | A-OKVQA POPE Random | Accuracy | 89.5 | 36
Object Hallucination Evaluation | A-OKVQA POPE Popular | Accuracy | 86.8 | 36
Object Hallucination Evaluation | POPE GQA Popular | Accuracy | 86.8 | 30
Object Hallucination Probing | A-OKVQA (Adversarial split) | Accuracy | 79.1 | 27
Object Hallucination Probing | GQA Adversarial | Accuracy | 78.4 | 24
Object Hallucination Evaluation | MSCOCO (Random) | Accuracy | 91.5 | 12
Object Hallucination Evaluation | MSCOCO Popular | Accuracy | 89.2 | 12
Object Hallucination Evaluation | GQA (Random) | Accuracy | 89.3 | 12
