
Scalpel: Fine-Grained Alignment of Attention Activation Manifolds via Mixture Gaussian Bridges to Mitigate Multimodal Hallucination

About

Rapid progress in large vision-language models (LVLMs) has achieved unprecedented performance on vision-language tasks. However, due to the strong priors of large language models (LLMs) and misaligned attention across modalities, LVLMs often generate outputs inconsistent with the visual content, a phenomenon termed hallucination. To address this, we propose Scalpel, a method that reduces hallucination by refining attention activation distributions toward more credible regions. Scalpel predicts trusted attention directions for each head in the Transformer layers during inference and adjusts activations accordingly. It employs a Gaussian mixture model to capture the multi-peak distributions of attention in the trust and hallucination manifolds, and uses entropic optimal transport (equivalent to the Schrödinger bridge problem) to map Gaussian components precisely. During mitigation, Scalpel dynamically adjusts the intervention strength and direction based on component membership and the mapping between hallucination and trust activations. Extensive experiments across multiple datasets and benchmarks demonstrate that Scalpel effectively mitigates hallucinations, outperforming previous methods and achieving state-of-the-art performance. Moreover, Scalpel is model- and data-agnostic and adds no extra computation, requiring only a single decoding step.
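The core pipeline the abstract describes (map Gaussian components between the hallucination and trust manifolds via entropic optimal transport, then steer an activation toward its mapped trusted component) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names `sinkhorn` and `steer`, the uniform marginals, the isotropic component responsibilities, and the steering strength `alpha` are all simplifying assumptions.

```python
import numpy as np

def sinkhorn(cost, eps=1.0, iters=200):
    """Entropic OT plan between uniform marginals (toy Sinkhorn iteration)."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform component weights
    K = np.exp(-cost / eps)                          # Gibbs kernel
    u = np.ones(n)
    for _ in range(iters):
        v = b / (K.T @ u)   # scale columns to match marginal b
        u = a / (K @ v)     # scale rows to match marginal a
    return u[:, None] * K * v[None, :]               # transport plan P

def steer(x, hall_means, trust_means, plan, alpha=0.5):
    """Move activation x toward the trusted components its hallucination
    components are transported to (isotropic-Gaussian soft assignment)."""
    d2 = ((hall_means - x) ** 2).sum(axis=1)
    resp = np.exp(-0.5 * d2)
    resp /= resp.sum()                               # component membership of x
    rows = plan / plan.sum(axis=1, keepdims=True)    # per-component mapping
    target = resp @ rows @ trust_means               # mapped trusted direction
    return x + alpha * (target - x)                  # intervention step

# Toy example: two hallucination components, two trusted components.
hall_means = np.array([[0.0, 0.0], [4.0, 4.0]])
trust_means = np.array([[1.0, 0.0], [5.0, 4.0]])
cost = ((hall_means[:, None, :] - trust_means[None, :, :]) ** 2).sum(-1)
P = sinkhorn(cost)
x = np.array([0.2, 0.1])          # activation near the first component
y = steer(x, hall_means, trust_means, P)
```

In this sketch the transport plan concentrates on the cheap pairings (component 0 → 0, 1 → 1), so the steered activation moves toward the nearest trusted mean; in the actual method the components would be fitted per attention head and the intervention strength would depend on the learned mapping.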

Ziqiang Shi, Rujie Liu, Shanshan Yu, Satoshi Munakata, Koichi Shirahata • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Object Hallucination Evaluation | MS-COCO (POPE Adversarial) | Accuracy | 85.97 | 80
Object Hallucination Probing | GQA POPE Popular | Accuracy | 84.57 | 33
Object Hallucination Probing | A-OKVQA (Adversarial split) | Accuracy | 78.4 | 27
Object Hallucination Probing | GQA POPE Random | Accuracy | 89.93 | 26
Object Hallucination Probing | GQA Adversarial | Accuracy | 81 | 24
Object Hallucination Probing | COCO POPE Random | Accuracy | 90.67 | 17
Object Hallucination Probing | A-OKVQA (Random split) | Accuracy | 89.87 | 12
Object Hallucination Probing | OKVQA POPE Popular | Accuracy | 85 | 11
Object Hallucination Probing | MS COCO Popular split | Accuracy | 87.87 | 5
