
Looking Back and Forth: Cross-Image Attention Calibration and Attentive Preference Learning for Multi-Image Hallucination Mitigation

About

Although large vision-language models (LVLMs) have demonstrated remarkable capabilities, they are prone to hallucinations in multi-image tasks. We attribute this issue to limitations in existing attention mechanisms and to insufficient cross-image modeling. Motivated by this, we propose a structured hallucination mitigation framework combining Cross-Image Attention calibration and Preference Learning (CAPL). CAPL explicitly enhances inter-image interactions at the architectural level while reinforcing reliance on genuine cross-image evidence during training, thereby improving the model's perception and modeling of cross-image associations. Specifically, we (i) introduce a selectable image-token interaction attention mechanism that establishes fine-grained cross-image entity alignment and information flow; and (ii) design a cross-image-modeling-based preference optimization strategy that contrasts reasoning outcomes obtained under full inter-image interaction with those obtained when the images are mutually invisible, encouraging the model to ground its predictions in authentic visual evidence and mitigating erroneous inferences driven by textual priors. Experimental results demonstrate that CAPL consistently improves performance across multiple model architectures, achieving stable gains on both multi-image hallucination and general benchmarks. Notably, performance on single-image visual tasks remains stable or slightly improves, indicating strong generalization capability.
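The two ingredients above can be illustrated with a minimal PyTorch sketch. The first function builds an additive attention mask that either permits full inter-image interaction or makes the images "mutually invisible" by blocking attention between tokens of different images; the second is a standard DPO-style preference loss that could contrast outputs produced under the two masking regimes. Both are hypothetical illustrations under assumed interfaces (`image_spans`, `beta`, etc.), not the paper's actual implementation, which describes a more fine-grained, selectable entity-level mechanism.

```python
import torch
import torch.nn.functional as F


def cross_image_attention_mask(image_spans, seq_len, cross_image=True):
    """Additive attention mask for a sequence holding several images' tokens.

    image_spans: list of (start, end) index ranges (end exclusive), one per image.
    cross_image=False blocks attention between tokens of different images
    ("mutually invisible"); cross_image=True allows full inter-image interaction.
    Text tokens outside the spans are never restricted.
    """
    mask = torch.zeros(seq_len, seq_len)
    if not cross_image:
        for i, (s1, e1) in enumerate(image_spans):
            for j, (s2, e2) in enumerate(image_spans):
                if i != j:
                    # queries of image i may not attend to keys of image j
                    mask[s1:e1, s2:e2] = float("-inf")
    return mask


def preference_loss(logp_chosen, logp_rejected,
                    ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style loss: prefer the response grounded in full cross-image
    evidence (chosen) over the one produced with images mutually invisible
    (rejected), relative to a frozen reference model."""
    logits = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -F.logsigmoid(logits)
```

The mask can be passed as `attn_mask` to `F.scaled_dot_product_attention`; training would sample the chosen/rejected pairs by running the same prompt under the two mask settings.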

Xiaochen Yang, Hao Fang, Jiawei Kong, Yaoxin Mao, Bin Chen, Shu-Tao Xia • 2026

Related benchmarks

Task                                         | Dataset   | Metric        | Result | Rank
---------------------------------------------|-----------|---------------|--------|-----
Object Hallucination Evaluation              | POPE      | -             | -      | 1455
Hallucination Evaluation                     | CHAIR     | CHAIR_s       | 28.6   | 252
Multimodal Reasoning                         | MMBench   | Overall Score | 87.48  | 78
Multi-image Reasoning                        | MIRB      | Accuracy      | 60.06  | 70
Multi-image Understanding                    | QBench2   | Accuracy      | 75.3   | 30
Multi-modal Hallucination Evaluation         | AMBER     | Mean Accuracy | 89.79  | 22
Multi-image Understanding                    | MIBench   | Accuracy      | 71.7   | 22
Multi-image Hallucination Evaluation         | BLINK     | Accuracy      | 61.33  | 12
Multi-image Hallucination Evaluation         | MuirBench | Accuracy      | 62     | 12
Multi-image Reasoning and General Capability | NLVR2     | Accuracy      | 90.13  | 12
