
Mask What Matters: Mitigating Object Hallucinations in Multimodal Large Language Models with Object-Aligned Visual Contrastive Decoding

About

We study object hallucination in Multimodal Large Language Models (MLLMs) and improve visual contrastive decoding (VCD) by constructing an object-aligned auxiliary view. Leveraging the object-centric attention of self-supervised Vision Transformers, we remove the most salient visual evidence to build an auxiliary view that disrupts visually unsupported tokens and yields a stronger contrast signal. Our method is prompt-agnostic and model-agnostic, and plugs seamlessly into the existing VCD pipeline with little computational overhead (one additional, cacheable forward pass). Empirically, it delivers consistent gains on two popular object hallucination benchmarks across two MLLMs.
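The two building blocks described above can be sketched in a few lines: (1) drop the most salient patches according to a ViT attention map to form the auxiliary view, and (2) contrast the logits of the full and auxiliary forward passes. This is a minimal, hypothetical illustration (the function names, the `keep_ratio` and `alpha` parameters, and the use of a 1-D [CLS]-to-patch attention vector are our assumptions, not the paper's exact implementation):

```python
import numpy as np

def object_saliency_mask(attn, keep_ratio=0.5):
    """Build the object-aligned auxiliary view's patch mask.

    attn: (num_patches,) attention weights from a self-supervised ViT
          (e.g. [CLS]-to-patch attention); higher = stronger object evidence.
    Returns a boolean mask where True means the patch is KEPT; the most
    salient (1 - keep_ratio) fraction of patches is removed.
    """
    num_drop = int(len(attn) * (1.0 - keep_ratio))
    drop = np.argsort(attn)[-num_drop:]  # indices of the most salient patches
    mask = np.ones(len(attn), dtype=bool)
    mask[drop] = False
    return mask

def contrastive_logits(logits_full, logits_aux, alpha=1.0):
    """VCD-style contrast of next-token logits.

    Tokens still predicted from the object-masked auxiliary view are
    penalized; tokens grounded in the removed evidence are amplified:
        scores = (1 + alpha) * full - alpha * auxiliary
    """
    return (1.0 + alpha) * logits_full - alpha * logits_aux

# Toy usage: 4 patches, patches 1 and 3 carry the object evidence.
attn = np.array([0.1, 0.9, 0.2, 0.8])
mask = object_saliency_mask(attn, keep_ratio=0.5)   # keeps patches 0 and 2
scores = contrastive_logits(np.array([2.0, 0.0]), np.array([1.0, 1.0]))
```

Since the auxiliary view depends only on the image (not the prompt), its forward pass can be computed once and cached across queries, which is what keeps the overhead to a single extra pass.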

Boqi Chen, Xudong Liu, Jianing Qiu • 2026

Related benchmarks

| Task                     | Dataset                      | Metric   | Result | Rank |
|--------------------------|------------------------------|----------|--------|------|
| Object Hallucination     | POPE (Random)                | F1 Score | 86.5   | 200  |
| Object Hallucination     | POPE Adversarial             | Accuracy | 82.9   | 196  |
| Object Hallucination     | POPE Popular                 | F1 Score | 84.3   | 188  |
| Hallucination Evaluation | POPE Random v1.0 (test)      | Accuracy | 89.5   | 31   |
| Hallucination Evaluation | POPE Popular v1.0 (test)     | Accuracy | 85.7   | 31   |
| Hallucination Evaluation | POPE Adversarial v1.0 (test) | Accuracy | 81.9   | 31   |
