
Locate-then-Sparsify: Attribution Guided Sparse Strategy for Visual Hallucination Mitigation

About

Despite significant advances in Large Vision-Language Models (LVLMs), their tendency to generate hallucinations undermines reliability and restricts broader practical deployment. Among hallucination mitigation methods, feature steering has emerged as a promising approach that reduces erroneous outputs in LVLMs without increasing inference cost. However, current methods apply uniform feature steering across all layers. This heuristic strategy ignores inter-layer differences, potentially disrupting layers unrelated to hallucination and ultimately degrading performance on general tasks. In this paper, we propose a plug-and-play framework called Locate-Then-Sparsify for Feature Steering (LTS-FS), which controls the steering intensity according to each layer's hallucination relevance. We first construct a synthetic dataset comprising token-level and sentence-level hallucination cases. Based on this dataset, we introduce an attribution method built on causal interventions to quantify the hallucination relevance of each layer. With the attribution scores across layers, we propose a layerwise strategy that converts these scores into feature steering intensities for individual layers, enabling more precise adjustments on hallucination-relevant layers. Extensive experiments across multiple LVLMs and benchmarks demonstrate that LTS-FS effectively mitigates hallucination while preserving strong general performance.
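The core idea of the abstract, converting per-layer attribution scores into sparse, layerwise steering intensities, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sparsification rule (zeroing layers below the mean score) and the linear rescaling into `[0, base_alpha]` are assumptions, as are the function names.

```python
import numpy as np

def attribution_to_intensities(scores, base_alpha=1.0):
    """Map per-layer hallucination-attribution scores to steering
    intensities. The exact mapping used by LTS-FS is not specified in
    the abstract; this sketch assumes a threshold-then-rescale scheme."""
    scores = np.asarray(scores, dtype=float)
    # Sparsify: zero out layers scoring below the mean, so layers
    # weakly related to hallucination are left untouched.
    sparse = np.where(scores >= scores.mean(), scores, 0.0)
    # Rescale the surviving scores into [0, base_alpha].
    if sparse.max() > 0:
        sparse = base_alpha * sparse / sparse.max()
    return sparse

def steer_hidden_states(hidden_states, steering_vectors, intensities):
    """Apply layerwise feature steering: h_l' = h_l + alpha_l * v_l."""
    return [h + a * v
            for h, a, v in zip(hidden_states, intensities, steering_vectors)]
```

With uniform steering, every layer would receive the same alpha; here, layers whose attribution score falls below the threshold get alpha = 0 and are passed through unchanged.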

TianTian Dang, Chao Bi, Shufan Shen, Jinzhe Liu, Qingming Huang, Shuhui Wang • 2026

Related benchmarks

Task | Dataset | Result | Rank
Object Hallucination Evaluation | POPE | Accuracy: 79.92 | 1455
Object Hallucination | POPE (Random) | F1 Score: 87.64 | 285
Object Hallucination | POPE Popular | F1 Score: 83.58 | 273
Hallucination Evaluation | CHAIR | CHAIR_s: 46.8 | 252
Object Hallucination Evaluation | POPE Adversarial | -- | 55
Hallucination Evaluation | MSCOCO | CS Score: 46.8 | 21
Caption Hallucination Evaluation | CHAIR | CS Score: 46.8 | 20
Object Hallucination Evaluation | POPE GQA | Accuracy: 77.15 | 20
Multimodal Assistant Evaluation | LLaVA-Bench GPT-4V-aided (full) | Accuracy: 6.96 | 6
Generative Capability Evaluation | CLAIR | CLAIR Details Score: 6.23 | 4
