Vision-Language Introspection: Mitigating Overconfident Hallucinations in MLLMs via Interpretable Bi-Causal Steering

About

Object hallucination critically undermines the reliability of Multimodal Large Language Models (MLLMs), often stemming from a fundamental failure in cognitive introspection, where models blindly trust linguistic priors over specific visual evidence. Existing mitigations remain limited: contrastive decoding approaches operate superficially without rectifying internal semantic misalignments, while current latent steering methods rely on static vectors that lack instance-specific precision. We introduce Vision-Language Introspection (VLI), a training-free inference framework that simulates a metacognitive self-correction process. VLI first performs Attributive Introspection to diagnose hallucination risks via probabilistic conflict detection and to localize the causal visual anchors. It then employs Interpretable Bi-Causal Steering to actively modulate the inference process, dynamically isolating visual evidence from background noise while neutralizing blind confidence through adaptive calibration. VLI achieves state-of-the-art performance on advanced models, reducing object hallucination rates by 12.67% on MMHal-Bench and improving accuracy by 5.8% on POPE.
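
The abstract compresses both stages into a sentence each. For concreteness, below is a minimal PyTorch sketch of how such a two-stage decoding step could be wired together. Everything in it is an illustrative assumption rather than the authors' implementation: the KL-based conflict score, the attention-based anchor selection, the mean-difference steering direction, and all function names (introspect_conflict, localize_visual_anchors, bi_causal_steer, vli_step) are hypothetical stand-ins.

```python
# Minimal sketch of a VLI-style decoding step, assuming access to the model's
# next-token logits (with and without the image), its cross-attention mass
# over the input sequence, and per-token hidden states. All names are
# hypothetical; this is not the paper's code.
import torch
import torch.nn.functional as F

def introspect_conflict(logits_with_image: torch.Tensor,
                        logits_text_only: torch.Tensor) -> float:
    """Attributive Introspection (sketch): score the disagreement between the
    vision-conditioned distribution and the text-only linguistic prior."""
    p_vis = F.softmax(logits_with_image, dim=-1)
    log_p_txt = F.log_softmax(logits_text_only, dim=-1)
    # KL(p_vis || p_txt) as a simple probabilistic conflict score (assumption).
    return F.kl_div(log_p_txt, p_vis, reduction="sum").item()

def localize_visual_anchors(cross_attn: torch.Tensor,
                            image_token_mask: torch.Tensor,
                            top_k: int = 8) -> torch.Tensor:
    """Treat the most-attended image tokens as the causal visual anchors."""
    scores = cross_attn.masked_fill(~image_token_mask, float("-inf"))
    k = min(top_k, int(image_token_mask.sum()))
    return scores.topk(k).indices

def bi_causal_steer(hidden: torch.Tensor,
                    anchor_states: torch.Tensor,
                    background_states: torch.Tensor,
                    alpha: float) -> torch.Tensor:
    """Interpretable Bi-Causal Steering (sketch): shift the hidden state
    toward anchor evidence and away from background noise, scaled by alpha."""
    direction = anchor_states.mean(dim=0) - background_states.mean(dim=0)
    direction = direction / (direction.norm() + 1e-8)
    return hidden + alpha * direction

def vli_step(hidden, logits_vis, logits_txt, cross_attn, image_token_mask, states):
    """One steered decoding step: detect conflict, then steer proportionally."""
    conflict = introspect_conflict(logits_vis, logits_txt)
    alpha = min(1.0, conflict)  # adaptive calibration gain (assumption)
    anchors = localize_visual_anchors(cross_attn, image_token_mask)
    background = image_token_mask.clone()
    background[anchors] = False  # remaining image tokens act as background
    return bi_causal_steer(hidden, states[anchors], states[background], alpha)
```

The point the sketch tries to capture is the abstract's contrast with static steering: the gain alpha is instance-specific, derived from the conflict detected at the current token, rather than a fixed vector applied uniformly across inputs.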

Shuliang Liu, Songbo Yang, Dong Fang, Sihang Jia, Yuqi Tang, Lingfeng Su, Ruoshui Peng, Yibo Yan, Xin Zou, Xuming Hu • 2026

Related benchmarks

Task                            | Dataset          | Result           | Rank
--------------------------------|------------------|------------------|-----
Object Hallucination            | POPE Adversarial | Accuracy 86.7    | 288
Hallucination Evaluation        | MMHal-Bench      | MMHal Score 4.32 | 216
Object Hallucination Evaluation | POPE A-OKVQA     | Accuracy 89.23   | 75
Object Hallucination Evaluation | POPE MSCOCO      | Accuracy 92.58   | 55
Object Hallucination Evaluation | POPE Adversarial | Accuracy 85.4    | 55
Image Captioning                | POPE Adversarial | CIDEr 119.4      | 50
Object Hallucination Evaluation | POPE GQA         | Accuracy 86.47   | 20
