HTDC: Hesitation-Triggered Differential Calibration for Mitigating Hallucination in Large Vision-Language Models

About

Large vision-language models (LVLMs) achieve strong multimodal performance, but still suffer from hallucinations caused by unstable visual grounding and over-reliance on language priors. Existing training-free decoding methods typically apply calibration at every decoding step, introducing unnecessary computation and potentially disrupting stable predictions. We address this problem by identifying layer-wise hesitation, a simple signal of grounding instability reflected by fluctuations in token preference across intermediate layers. Based on this observation, we propose Hesitation-Triggered Differential Calibration (HTDC), a training-free decoding framework that preserves standard full-branch inference and activates calibration only at hesitation-prone steps. When triggered, HTDC contrasts the full branch with two lightweight probes, a visual-nullification probe and a semantic-nullification probe, to suppress hallucination-prone candidates while avoiding unnecessary intervention on stable steps. Experiments on representative hallucination benchmarks show that HTDC consistently reduces hallucinations while maintaining strong task accuracy, achieving a favorable trade-off between effectiveness and computational overhead.
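To make the decoding logic concrete, the sketch below illustrates the two ingredients the abstract describes: a hesitation signal computed from fluctuations in the top-1 token across intermediate layers, and a calibration step that fires only when that signal is high. The function names, the logit-lens-style layer projection, the threshold tau, the weights alpha and beta, and the contrastive combination rule are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def hesitation_score(layer_logits):
    """Fraction of consecutive intermediate layers whose top-1 token changes.

    layer_logits: array of shape (num_layers, vocab_size), assumed to come from
    projecting each intermediate hidden state onto the vocabulary
    (logit-lens style); the exact projection is an assumption here.
    """
    top1 = layer_logits.argmax(axis=-1)
    flips = np.sum(top1[1:] != top1[:-1])
    return flips / max(len(top1) - 1, 1)

def htdc_step(full_logits, visual_null_logits, semantic_null_logits,
              layer_logits, tau=0.3, alpha=1.0, beta=1.0):
    """One decoding step of an HTDC-style scheme (hypothetical parameters).

    Calibration fires only when the hesitation score exceeds tau; otherwise the
    full-branch logits are returned untouched, so stable steps are not disturbed.
    The contrastive combination below follows common contrastive-decoding
    practice and is an assumption, not the paper's exact formula.
    """
    if hesitation_score(layer_logits) <= tau:
        return full_logits  # stable step: no intervention
    return ((1 + alpha + beta) * full_logits
            - alpha * visual_null_logits      # probe with the image nullified
            - beta * semantic_null_logits)    # probe with the text prompt nullified

if __name__ == "__main__":
    # Dummy logits stand in for a real LVLM forward pass.
    rng = np.random.default_rng(0)
    vocab, n_layers = 32_000, 32
    full = rng.normal(size=vocab)
    vis_null = rng.normal(size=vocab)
    sem_null = rng.normal(size=vocab)
    layers = rng.normal(size=(n_layers, vocab))
    out = htdc_step(full, vis_null, sem_null, layers)
    print("next token:", int(out.argmax()))
```

In a real decoder, the two probe branches would only need to be evaluated at steps where the hesitation score crosses the threshold, which is where the claimed saving in computation over per-step calibration would come from.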

Xinyun Liu • 2026

Related benchmarks

Task                                   | Dataset     | Metric            | Result | Rank
Object Hallucination Evaluation       | MSCOCO POPE | Accuracy (Random) | 91.24  | 47
Object Hallucination Mitigation       | CHAIR       | CHAIRs Score      | 11.6   | 22
Large Vision-Language Model Evaluation | MME        | Perception Score  | 1710   | 12
Object Hallucination Evaluation       | GQA POPE    | Accuracy (Random) | 92.87  | 12
