
Mitigating Entangled Steering in Large Vision-Language Models for Hallucination Reduction

About

Large Vision-Language Models (LVLMs) have achieved remarkable success across cross-modal tasks but remain hindered by hallucinations: textual outputs inconsistent with the visual content. Existing methods mitigate hallucinations but often alter generation behavior, resulting in shorter outputs and shifted token distributions, especially in latent-space steering approaches. We identify that this issue stems from entangled steering signals, where suppressing hallucinations inadvertently disrupts the model's intrinsic generation behavior. To address this, we propose MESA, a plug-and-play framework that performs controlled and selective latent intervention for hallucination mitigation. Specifically, MESA targets hallucination-relevant responses while preserving the model's original token distribution, enabling effective hallucination reduction without compromising generation behavior. Extensive experiments across diverse generative and discriminative benchmarks demonstrate that MESA consistently reduces hallucinations while better preserving generation behavior, outperforming prior methods across multiple LVLM families.

Yuanhong Zhang, Zhaoyang Wang, Xin Zhang, Weizhan Zhang, Joey Tianyi Zhou • 2026
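The abstract describes MESA as a selective latent intervention: steering is applied only where it is relevant to hallucination, so the rest of the token distribution is left intact. The paper's actual mechanism is not given on this page, so the following is only a minimal sketch of what selective latent steering can look like, assuming a hook-based intervention on a single decoder layer; the steering direction, threshold, and strength used here are illustrative placeholders, not MESA's parameters.

```python
# Hedged sketch: selective latent steering via a PyTorch forward hook.
# The "hallucination direction", threshold, and alpha are assumptions for
# illustration only; they are not taken from the MESA paper.
import torch
import torch.nn as nn

class SelectiveSteeringHook:
    """Adds a steering offset only to hidden states that project strongly
    onto a given direction, leaving other tokens untouched so the model's
    original token distribution is largely preserved."""

    def __init__(self, steer_direction: torch.Tensor, alpha: float = 4.0,
                 threshold: float = 0.1):
        self.direction = steer_direction / steer_direction.norm()
        self.alpha = alpha          # steering strength (assumed hyperparameter)
        self.threshold = threshold  # relevance cutoff for intervention

    def __call__(self, module, inputs, output):
        hidden = output  # (batch, seq_len, d_model)
        # Per-token relevance score against the steering direction.
        scores = (hidden @ self.direction) / (hidden.norm(dim=-1) + 1e-6)
        mask = (scores > self.threshold).unsqueeze(-1).float()
        # Subtract the direction only where a token is flagged as relevant.
        return hidden - self.alpha * mask * self.direction

# Toy usage: one transformer layer standing in for an LVLM decoder block.
d_model = 64
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
direction = torch.randn(d_model)  # placeholder hallucination direction
handle = layer.register_forward_hook(SelectiveSteeringHook(direction))

x = torch.randn(2, 10, d_model)
out = layer(x)   # steering applied only to flagged tokens
handle.remove()
```

The mask is what makes this intervention selective: tokens below the threshold pass through unchanged, which is one way to avoid the entangled, distribution-shifting steering the abstract describes.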

Related benchmarks

Task | Dataset | Metric | Result | Rank
Object Hallucination Evaluation | MS-COCO POPE (Adversarial) | Accuracy | 84.47 | 138
Object Hallucination Evaluation | MS-COCO POPE (Popular) | Accuracy | 87.63 | 108
Object Hallucination Evaluation | MS-COCO POPE (Random) | Accuracy | 90.27 | 71
Object Hallucination Probing | GQA POPE (Popular) | Accuracy | 86.07 | 49
Object Hallucination Probing | GQA POPE (Random) | Accuracy | 89.5 | 42
Object Hallucination Probing | GQA (Adversarial) | Accuracy | 82.73 | 40
Object Hallucination Evaluation | GQA (Random) | Accuracy | 89.5 | 28
Object Hallucination Evaluation | MSCOCO (Random) | Accuracy | 90.27 | 28
Object Hallucination Mitigation on Generative Tasks | AMBER | CHAIR | 6.4 | 22
Object Hallucination Probing | GQA POPE (Adversarial) | Accuracy | 82.73 | 19
Showing 10 of 18 rows
