
ACT Now: Preempting LVLM Hallucinations via Adaptive Context Integration

About

Large Vision-Language Models (LVLMs) frequently suffer from severe hallucination issues. Existing mitigation strategies predominantly rely on isolated, single-step states to enhance visual focus or suppress strong linguistic priors. However, these static approaches neglect dynamic context changes across the generation process and struggle to correct inherited information loss. To address this limitation, we propose Adaptive Context inTegration (ACT), a training-free inference intervention method that mitigates hallucination through the adaptive integration of contextual information. Specifically, we first propose visual context exploration, which leverages spatio-temporal profiling to adaptively amplify attention heads responsible for visual exploration. To further facilitate vision-language alignment, we propose semantic context aggregation, which marginalizes over potential semantic queries to effectively aggregate visual evidence, thereby resolving the information loss caused by the discrete nature of token prediction. Extensive experiments across diverse LVLMs demonstrate that ACT significantly reduces hallucinations and achieves competitive results on both discriminative and generative benchmarks, acting as a robust and highly adaptable solution without compromising fundamental generation capabilities.
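The two components can be illustrated schematically. This is a minimal conceptual sketch, not the paper's implementation: all function names, tensor shapes, the profiling scores, and the amplification/weighting scheme below are illustrative assumptions.

```python
import numpy as np

def amplify_visual_heads(attn, head_scores, top_k=4, gamma=1.5):
    """Visual context exploration (sketch): scale up the attention heads
    that a profiling pass scored as most responsible for visual exploration.

    attn:        (num_heads, q_len, kv_len) attention weights
    head_scores: (num_heads,) hypothetical spatio-temporal profiling scores
    """
    out = attn.copy()
    top = np.argsort(head_scores)[-top_k:]   # heads with the highest scores
    out[top] *= gamma                        # amplify their attention mass
    # renormalize so each query row is again a distribution over keys
    out /= out.sum(axis=-1, keepdims=True)
    return out

def aggregate_semantic_context(query_logits, query_weights):
    """Semantic context aggregation (sketch): marginalize next-token
    distributions over a set of candidate semantic queries.

    query_logits:  (num_queries, vocab) logits, one row per candidate query
    query_weights: (num_queries,) hypothetical relevance weights
    """
    # softmax each query's logits into a distribution
    probs = np.exp(query_logits - query_logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    w = np.asarray(query_weights, dtype=float)
    w /= w.sum()
    # weighted average over queries -> marginal next-token distribution
    return (w[:, None] * probs).sum(axis=0)
```

The sketch only conveys the shape of the intervention: a per-head reweighting applied inside the attention computation, followed by a mixture over candidate queries at the output distribution; the paper's actual profiling and aggregation rules may differ.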

Bei Yan, Yuecong Min, Jie Zhang, Shiguang Shan, Xilin Chen• 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | -- | 1455 |
| Object Hallucination Evaluation | CHAIR | CS Score 41.6 | 108 |
| Hallucination Evaluation | MME Hallucination | Existence Score 195 | 39 |
| Hallucination Evaluation (Discriminative) | AMBER-d | Accuracy 89.2 | 12 |
| Hallucination Evaluation (Generative) | AMBER-g | CHAIR Score 4.5 | 12 |
| Multimodal Evaluation | MME | Total Score 711.7 | 12 |
