Mitigating Entangled Steering in Large Vision-Language Models for Hallucination Reduction
About
Large Vision-Language Models (LVLMs) have achieved remarkable success on cross-modal tasks but remain prone to hallucinations, producing textual outputs that are inconsistent with the visual content. Existing methods mitigate hallucinations but often alter generation behavior, yielding shorter outputs and shifted token distributions; this is especially pronounced in latent-space steering approaches. We identify that this issue stems from entangled steering signals, where suppressing hallucinations inadvertently disrupts the model's intrinsic generation behavior. To address this, we propose MESA, a plug-and-play framework that performs controlled and selective latent intervention for hallucination mitigation. Specifically, MESA targets hallucination-relevant responses while preserving the model's original token distribution, enabling effective hallucination reduction without compromising generation behavior. Extensive experiments across diverse generative and discriminative benchmarks demonstrate that MESA consistently reduces hallucinations while better preserving generation behavior, outperforming prior methods across multiple LVLM families.
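
As a rough, non-authoritative illustration of what "selective latent intervention" can look like in code, the sketch below applies a steering direction to a layer's hidden states through a PyTorch forward hook, but only for tokens whose activations align with that direction. The steering direction, the cosine-similarity gate, the scaling factor `alpha`, and the toy layer are illustrative assumptions, not MESA's actual procedure.

```python
# Minimal sketch of selective latent-space steering via a forward hook (PyTorch).
# The direction, threshold, and layer choice are illustrative assumptions only.
import torch
import torch.nn as nn


def make_selective_steering_hook(direction: torch.Tensor,
                                 alpha: float = 1.0,
                                 tau: float = 0.3):
    """Return a forward hook that removes the component of each hidden state
    along `direction`, but only for tokens whose cosine similarity with that
    direction exceeds `tau` (the "selective" part of this sketch)."""
    direction = direction / direction.norm()

    def hook(module, inputs, output):
        hidden = output  # (batch, seq, dim) for this toy layer
        sim = torch.nn.functional.cosine_similarity(
            hidden, direction.view(1, 1, -1), dim=-1)          # (batch, seq)
        gate = (sim > tau).float().unsqueeze(-1)                # which tokens to steer
        proj = (hidden @ direction).unsqueeze(-1) * direction   # component along direction
        return hidden - alpha * gate * proj                     # steer only gated tokens

    return hook


if __name__ == "__main__":
    torch.manual_seed(0)
    layer = nn.Linear(16, 16)            # stand-in for a transformer block
    direction = torch.randn(16)          # hypothetical "hallucination" direction
    handle = layer.register_forward_hook(make_selective_steering_hook(direction))
    x = torch.randn(2, 5, 16)            # toy hidden states (batch, seq, dim)
    steered = layer(x)
    handle.remove()
    print(steered.shape)                 # torch.Size([2, 5, 16])
```

In a real LVLM the hook would be registered on one or more decoder layers, and the direction would be estimated from hidden states of hallucinated versus grounded responses; those details are left out here because they depend on the specific model and method.
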
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | MS-COCO (POPE Adversarial) | Accuracy | 84.47 | 138 |
| Object Hallucination Evaluation | MS-COCO POPE (Popular) | Accuracy | 87.63 | 108 |
| Object Hallucination Evaluation | MS-COCO POPE Random | Accuracy | 90.27 | 71 |
| Object Hallucination Probing | GQA POPE Popular | Accuracy | 86.07 | 49 |
| Object Hallucination Probing | GQA POPE Random | Accuracy | 89.5 | 42 |
| Object Hallucination Probing | GQA Adversarial | Accuracy | 82.73 | 40 |
| Object Hallucination Evaluation | GQA (Random) | Accuracy | 89.5 | 28 |
| Object Hallucination Evaluation | MSCOCO (Random) | Accuracy | 90.27 | 28 |
| Object Hallucination Mitigation on Generative Tasks | AMBER | CHAIR | 6.4 | 22 |
| Object Hallucination Probing | GQA POPE (Adversarial) | Accuracy | 82.73 | 19 |
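
For reference, the AMBER row above reports CHAIR, which measures how many generated object mentions do not appear in the image's ground-truth annotations (lower is better). The sketch below computes CHAIR-style scores with naive word matching; the function name, toy data, and matching rule are illustrative assumptions, whereas the actual CHAIR/AMBER evaluation uses curated object vocabularies and synonym lists.

```python
# Minimal sketch of CHAIR-style scoring, assuming precomputed ground-truth
# objects per image and a naive word-match object extractor (illustrative only).
from typing import Dict, Set


def chair_scores(captions: Dict[str, str],
                 gt_objects: Dict[str, Set[str]],
                 vocab: Set[str]):
    """Return (CHAIR_i, CHAIR_s) as percentages.

    CHAIR_i = hallucinated object mentions / all object mentions
    CHAIR_s = captions with at least one hallucinated object / all captions
    """
    mentions = hallucinated = captions_with_hallu = 0
    for image_id, caption in captions.items():
        words = set(caption.lower().replace(".", "").split())
        mentioned = words & vocab                              # objects named in the caption
        hallu = mentioned - gt_objects.get(image_id, set())    # mentions not in the image
        mentions += len(mentioned)
        hallucinated += len(hallu)
        captions_with_hallu += bool(hallu)
    chair_i = 100.0 * hallucinated / max(mentions, 1)
    chair_s = 100.0 * captions_with_hallu / max(len(captions), 1)
    return chair_i, chair_s


if __name__ == "__main__":
    caps = {"img1": "A dog sits next to a frisbee on the grass."}
    gts = {"img1": {"dog", "grass"}}
    vocab = {"dog", "frisbee", "grass", "cat"}
    print(chair_scores(caps, gts, vocab))  # "frisbee" counts as hallucinated
```
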