
Self-Correcting Decoding with Generative Feedback for Mitigating Hallucinations in Large Vision-Language Models

About

While recent Large Vision-Language Models (LVLMs) have shown remarkable performance in multi-modal tasks, they are prone to generating hallucinatory text responses that do not align with the given visual input, which restricts their practical applicability in real-world scenarios. In this work, inspired by the observation that the text-to-image generation process is the inverse of image-conditioned response generation in LVLMs, we explore the potential of leveraging text-to-image generative models to assist in mitigating hallucinations in LVLMs. We discover that generative models can offer valuable self-feedback for mitigating hallucinations at both the response and token levels. Building on this insight, we introduce self-correcting Decoding with Generative Feedback (DeGF), a novel training-free algorithm that incorporates feedback from text-to-image generative models into the decoding process to effectively mitigate hallucinations in LVLMs. Specifically, DeGF generates an image from the initial response produced by LVLMs, which acts as an auxiliary visual reference and provides self-feedback to verify and correct the initial response through complementary or contrastive decoding. Extensive experimental results validate the effectiveness of our approach in mitigating diverse types of hallucinations, consistently surpassing state-of-the-art methods across six benchmarks. Code is available at https://github.com/zhangce01/DeGF.
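The abstract describes a decoding step that fuses the LVLM's original next-token logits with logits conditioned on an image regenerated from the initial response, switching between complementary and contrastive fusion depending on agreement. The toy sketch below illustrates that switching logic only; it is not the authors' implementation, and the Jensen-Shannon agreement test, the `threshold`, and the `alpha` weight are all illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence KL(p || q), with epsilon for numerical stability."""
    eps = 1e-12
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def degf_step(logits_orig, logits_aux, threshold=0.1, alpha=1.0):
    """One DeGF-style decoding step (toy sketch, not the paper's code).

    logits_orig: next-token logits conditioned on the original image.
    logits_aux:  next-token logits conditioned on the image regenerated
                 from the model's initial response (the self-feedback).
    """
    p, q = softmax(logits_orig), softmax(logits_aux)
    # Jensen-Shannon divergence as a toy measure of agreement (assumption).
    m = 0.5 * (p + q)
    jsd = 0.5 * (kl(p, m) + kl(q, m))
    if jsd < threshold:
        # The two views agree: complementary decoding reinforces
        # evidence shared by both distributions.
        fused = logits_orig + alpha * logits_aux
    else:
        # The views disagree: contrastive decoding penalizes tokens
        # that the regenerated image does not support.
        fused = (1.0 + alpha) * logits_orig - alpha * logits_aux
    return int(np.argmax(fused))
```

In the agreeing branch the fused distribution sharpens around tokens both views favor; in the disagreeing branch, tokens scored highly only under the regenerated image are suppressed, which is the hallucination-correction effect the abstract describes.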

Ce Zhang, Zifu Wan, Zhehan Kan, Martin Q. Ma, Simon Stepputtis, Deva Ramanan, Russ Salakhutdinov, Louis-Philippe Morency, Katia Sycara, Yaqi Xie• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | – | – | 1455 |
| Object Hallucination Evaluation | MS-COCO POPE (Adversarial) | Accuracy | 83.47 | 138 |
| Object Hallucination Evaluation | MS-COCO POPE (Popular) | Accuracy | 86.5 | 108 |
| Object Hallucination Evaluation | CHAIR | CS Score | 24 | 108 |
| Object Hallucination Evaluation | MS-COCO POPE (Random) | Accuracy | 89.73 | 71 |
| Object Hallucination Evaluation | A-OKVQA POPE (Popular) | Accuracy | 86.47 | 52 |
| Object Hallucination Evaluation | GQA POPE (Popular) | Accuracy | 82.1 | 46 |
| Object Hallucination Probing | GQA POPE (Random) | Accuracy (GQA POPE) | 87.09 | 42 |
| Object Hallucination Evaluation | A-OKVQA POPE (Random) | Accuracy | 87.9 | 36 |
| Object Hallucination Assessment | A-OKVQA POPE (Adversarial) | Accuracy | 0.8075 | 18 |

Showing 10 of 15 rows.
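The CHAIR row above reports a sentence-level hallucination score ("CS Score", presumably CHAIR_S, where lower is better). As a minimal sketch of how CHAIR-style metrics are computed: CHAIR_S is the fraction of captions that mention at least one object absent from the image, and CHAIR_I is the fraction of all object mentions that are hallucinated. The helper below is hypothetical, not the benchmark's official scorer.

```python
def chair_scores(mentioned, ground_truth):
    """Compute CHAIR_S and CHAIR_I over a set of captioned images.

    mentioned:    list of sets, objects mentioned in each caption.
    ground_truth: list of sets, objects actually present in each image.
    """
    assert len(mentioned) == len(ground_truth)
    halluc_captions = 0   # captions with >= 1 hallucinated object
    halluc_mentions = 0   # total hallucinated object mentions
    total_mentions = 0    # total object mentions across captions
    for objs, gt in zip(mentioned, ground_truth):
        fake = objs - gt  # mentioned but not actually in the image
        halluc_captions += 1 if fake else 0
        halluc_mentions += len(fake)
        total_mentions += len(objs)
    chair_s = halluc_captions / len(mentioned)
    chair_i = halluc_mentions / max(total_mentions, 1)
    return chair_s, chair_i
```

For example, if one of two captions invents a "cat", CHAIR_S is 0.5 while CHAIR_I counts only that single mention against the total.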
