
SAVE: Sparse Autoencoder-Driven Visual Information Enhancement for Mitigating Object Hallucination

About

Although Multimodal Large Language Models (MLLMs) have advanced substantially, they remain vulnerable to object hallucination caused by language priors and visual information loss. To address this, we propose SAVE (Sparse Autoencoder-Driven Visual Information Enhancement), a framework that mitigates hallucination by steering the model along Sparse Autoencoder (SAE) latent features. A binary object-presence question-answering probe identifies the SAE features most indicative of the model's visual information processing, referred to as visual understanding features. Steering the model along these identified features reinforces grounded visual understanding and effectively reduces hallucination. With its simple design, SAVE outperforms state-of-the-art training-free methods on standard benchmarks, achieving a 10 percentage-point improvement in CHAIR_S and consistent gains on POPE and MMHal-Bench. Extensive evaluations across multiple models and layers confirm the robustness and generalizability of our approach. Further analysis reveals that steering along visual understanding features suppresses the generation of uncertain object tokens and increases attention to image tokens, mitigating hallucination. Code is released at https://github.com/wiarae/SAVE.
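The core operation described above, steering a hidden state along selected SAE latent features, can be sketched as follows. This is a minimal illustration, not the authors' implementation (which is in the linked repo): it assumes a standard ReLU SAE with encoder/decoder weight matrices, and the names (`sae_steer`, `feature_ids`, `alpha`) and the random toy weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 16, 64  # toy sizes; real SAEs are much wider

# Toy SAE weights; in practice these come from a trained sparse autoencoder.
W_enc = rng.standard_normal((d_model, d_sae)) * 0.1
b_enc = np.zeros(d_sae)
W_dec = rng.standard_normal((d_sae, d_model)) * 0.1

def sae_steer(h, feature_ids, alpha=2.0):
    """Steer hidden state h along selected SAE latent features.

    Encodes h into the SAE latent space with a ReLU encoder, boosts the
    activations of the chosen features by alpha, and adds the decoded
    difference back onto h, so only the boosted features change h.
    """
    z = np.maximum(W_enc.T @ h + b_enc, 0.0)  # sparse latent activations
    z_steered = z.copy()
    z_steered[feature_ids] += alpha           # amplify selected latents
    return h + W_dec.T @ (z_steered - z)      # decode only the delta

h = rng.standard_normal(d_model)
# Feature indices 3 and 7 stand in for probe-identified
# "visual understanding" features.
h_new = sae_steer(h, feature_ids=[3, 7], alpha=2.0)
```

In the paper's setting, the boosted indices would be the features that the binary object-presence probe ranks as most indicative of visual information processing, and the steering would be applied to MLLM hidden states at a chosen layer during generation.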

Sangha Park, Seungryong Yoo, Jisoo Mok, Sungroh Yoon • 2025

Related benchmarks

Task                                        Dataset                 Result                      Rank
Object Hallucination Evaluation             POPE                    -                           935
Hallucination Evaluation                    MMHal-Bench             MMHal Score: 3.7            174
Object Hallucination Evaluation             CHAIR                   CS Score: 28                49
Visual Question Answering (Multi-choice)    A-OKVQA (test)          Accuracy: 70.04             19
Inference Efficiency                        LLaVA-NeXT Inference    Inference Time (s): 8.863   6
