
Towards Interpretable Hallucination Analysis and Mitigation in LVLMs via Contrastive Neuron Steering

About

Large vision-language models (LVLMs) achieve remarkable multimodal understanding and generation but remain susceptible to hallucinations. Existing mitigation methods predominantly focus on output-level adjustments, leaving the internal mechanisms that give rise to these hallucinations largely unexplored. To gain a deeper understanding, we adopt a representation-level perspective by introducing sparse autoencoders (SAEs) to decompose dense visual embeddings into sparse, interpretable neurons. Through neuron-level analysis, we identify distinct neuron types, including always-on neurons and image-specific neurons. Our findings reveal that hallucinations often result from disruptions or spurious activations of image-specific neurons, while always-on neurons remain largely stable. Moreover, selectively enhancing or suppressing image-specific neurons enables controllable intervention in LVLM outputs, improving visual grounding and reducing hallucinations. Building on these insights, we propose Contrastive Neuron Steering (CNS), which identifies image-specific neurons via contrastive analysis between clean and noisy inputs. CNS selectively amplifies informative neurons while suppressing perturbation-induced activations, producing more robust and semantically grounded visual representations. This not only enhances visual understanding but also effectively mitigates hallucinations. Because it operates at the prefilling stage, CNS is fully compatible with existing decoding-stage methods. Extensive experiments on both hallucination-focused and general multimodal benchmarks demonstrate that CNS consistently reduces hallucinations while preserving overall multimodal understanding.
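To make the mechanism concrete, below is a minimal PyTorch sketch of the contrastive identification and steering step described in the abstract. It is not the paper's implementation: the function names (sae_encode, sae_decode, contrastive_neuron_steering), the thresholding rule, and the gain values (alpha, beta, tau) are all illustrative assumptions about how clean-vs-noisy SAE activations could be contrasted and rescaled at the prefilling stage.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of Contrastive Neuron Steering (CNS), assuming a pretrained
# SAE with encoder/decoder weights W_enc/W_dec and biases b_enc/b_dec.
# All names and hyperparameters here are illustrative, not from the paper.

def sae_encode(x, W_enc, b_enc):
    """Decompose dense visual embeddings x into sparse neuron activations."""
    return F.relu(x @ W_enc + b_enc)  # (num_tokens, num_neurons), sparse after ReLU

def sae_decode(z, W_dec, b_dec):
    """Reconstruct dense visual embeddings from sparse neuron activations."""
    return z @ W_dec + b_dec

def contrastive_neuron_steering(v_clean, v_noisy, W_enc, b_enc, W_dec, b_dec,
                                alpha=1.5, beta=0.5, tau=0.1):
    """Steer visual embeddings at the prefilling stage.

    Image-specific neurons are identified by contrasting SAE activations on
    clean vs. noise-perturbed inputs; informative neurons are amplified,
    perturbation-induced ones suppressed (thresholds/gains are assumptions).
    """
    z_clean = sae_encode(v_clean, W_enc, b_enc)
    z_noisy = sae_encode(v_noisy, W_enc, b_enc)

    # Mean activation per neuron across visual tokens.
    a_clean = z_clean.mean(dim=0)
    a_noisy = z_noisy.mean(dim=0)

    # Neurons far more active on the clean image are treated as image-specific;
    # neurons that fire mainly under noise are treated as spurious.
    image_specific = (a_clean - a_noisy) > tau
    spurious = (a_noisy - a_clean) > tau

    # Amplify informative neurons, suppress perturbation-induced activations.
    gain = torch.ones_like(a_clean)
    gain[image_specific] = alpha
    gain[spurious] = beta
    z_steered = z_clean * gain

    # Decode back to dense embeddings fed to the LVLM during prefilling.
    return sae_decode(z_steered, W_dec, b_dec)
```

Because the intervention rewrites only the visual embeddings before generation begins, a sketch like this composes naturally with decoding-stage mitigation methods, which is the compatibility property the abstract highlights.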

Guangtao Lyu, Xinyi Cheng, Qi Liu, Chenghao Xu, Jiexi Yan, Muli Yang, Fen Fang, Cheng Deng • 2026

Related benchmarks

Task                                    | Dataset            | Metric        | Result | Rank
Visual Question Answering               | VizWiz             | Accuracy      | 51.82  | 1043
Multimodal Capability Evaluation        | MM-Vet             | Score         | 32.8   | 282
Object Hallucination                    | POPE (Random)      | F1 Score      | 89.42  | 200
Object Hallucination                    | POPE (Adversarial) | Accuracy      | 86.12  | 196
Object Hallucination                    | POPE (Popular)     | F1 Score      | 86.78  | 188
Hallucination Evaluation                | CHAIR              | CHAIR_s       | 56.3   | 166
Generative Hallucination                | AMBER Generative   | CHAIR Score   | 7.1    | 24
Hallucination Analysis                  | HallusionBench     | fACC          | 18.7   | 4
Large Vision-Language Model Evaluation  | MME                | Overall Score | 1890   | 4
Multi-modal Instruction Following       | LLaVA-Wild         | Average Score | 66.84  | 4
