
Seeing Far and Clearly: Mitigating Hallucinations in MLLMs with Attention Causal Decoding

About

Recent advancements in multimodal large language models (MLLMs) have significantly improved performance in visual question answering; however, they often suffer from hallucinations. In this work, hallucinations are categorized into two main types: initial hallucinations and snowball hallucinations. We argue that adequate contextual information can be extracted directly from the token interaction process. Inspired by causal inference in the decoding strategy, we propose leveraging causal masks to establish information propagation between multimodal tokens. Our hypothesis is that insufficient interaction between these tokens may lead the model to rely on outlier tokens, overlooking dense and rich contextual cues. We therefore propose intervening in the propagation process by tackling outlier tokens to enhance in-context inference. To this end, we present FarSight, a versatile plug-and-play decoding strategy that reduces attention interference from outlier tokens simply by optimizing the causal mask. At the heart of our method is effective token propagation: we design an attention register structure within the upper triangular matrix of the causal mask that dynamically allocates attention to capture the attention otherwise diverted to outlier tokens. Moreover, we propose a positional awareness encoding method with a diminishing masking rate, allowing the model to attend to further preceding tokens, which is especially beneficial for video sequence tasks. In extensive experiments across different MLLMs on both image and video benchmarks, FarSight demonstrates significant hallucination-mitigating performance, proving its effectiveness.
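
To make the mask-shaping idea concrete, here is a minimal PyTorch sketch of what an "attention register" causal mask could look like. Everything here is an illustrative assumption rather than the paper's implementation: the function name `farsight_bias`, the `register_budget` and `decay` parameters, and the specific bias schedule are all hypothetical, and the sketch relaxes the hard causal mask into finite biases (which is not strictly causal) purely to show how slots in the upper triangle can absorb attention mass that would otherwise flow to outlier tokens. The positional awareness encoding is only hinted at via the distance-dependent decay; a faithful version would follow the authors' released code.

```python
import math

import torch

def farsight_bias(seq_len: int, register_budget: float = 0.1,
                  decay: float = 0.05) -> torch.Tensor:
    """Hypothetical additive attention bias of shape (seq_len, seq_len).

    On and below the diagonal the bias is 0 (ordinary causal attention).
    Above the diagonal, instead of the usual -inf, each slot receives a
    finite negative bias: these slots act as "registers" that can soak up
    softmax mass otherwise concentrated on outlier tokens. The bias grows
    more negative with distance (the decay term), a crude stand-in for a
    diminishing masking rate.
    """
    pos = torch.arange(seq_len)
    dist = pos.unsqueeze(0) - pos.unsqueeze(1)  # dist[i, j] = j - i
    bias = torch.zeros(seq_len, seq_len)
    upper = dist > 0  # strictly above the diagonal
    bias[upper] = (
        torch.log(torch.tensor(register_budget)) - decay * dist[upper].float()
    )
    return bias

# Usage: add the bias to the pre-softmax attention scores.
q = torch.randn(1, 8, 16, 64)  # (batch, heads, tokens, head_dim)
k = torch.randn(1, 8, 16, 64)
scores = q @ k.transpose(-2, -1) / math.sqrt(64) + farsight_bias(16)
attn = scores.softmax(dim=-1)  # register slots now hold a small share of the mass
```

Because FarSight only modifies the causal mask, a bias of this kind can in principle be added to any MLLM's attention scores without retraining, which is what makes the plug-and-play claim plausible.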

Feilong Tang, Chengzhi Liu, Zhongxing Xu, Ming Hu, Zelin Peng, Zhiwei Yang, Jionglong Su, Minquan Lin, Yifan Peng, Xuelian Cheng, Imran Razzak, Zongyuan Ge • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VizWiz | Accuracy | 50.8 | 1043 |
| Object Hallucination Evaluation | POPE | — | — | 935 |
| Multimodal Conversation | LLaVA-Bench Wild | Score | 74.7 | 52 |
| Object Hallucination Evaluation | CHAIR | CS Score | 41.6 | 49 |
| Object Hallucination Evaluation | MSCOCO CHAIR | CHAIR_S | 41.6 | 27 |
| Visual Question Answering | ScienceQA (SQA) | Accuracy | 67.4 | 27 |
| Visual Question Answering | MM-Vet | ASR Accuracy | 32.5 | 27 |
| Object Existence Hallucination Evaluation | POPE (average across three subsets) | Accuracy | 86.1 | 16 |
