Revealing Perception and Generation Dynamics in LVLMs: Mitigating Hallucinations via Validated Dominance Correction

About

Large Vision-Language Models (LVLMs) have shown remarkable capabilities, yet hallucinations remain a persistent challenge. This work presents a systematic analysis of the internal evolution of visual perception and token generation in LVLMs, revealing two key patterns. First, perception follows a three-stage GATE process: early layers perform a Global scan, intermediate layers Approach and Tighten on core content, and later layers Explore supplementary regions. Second, generation exhibits an SAD (Subdominant Accumulation to Dominant) pattern, where hallucinated tokens arise from the repeated accumulation of subdominant tokens lacking support from attention (visual perception) or feed-forward network (internal knowledge). Guided by these findings, we devise the VDC (Validated Dominance Correction) strategy, which detects unsupported tokens and replaces them with validated dominant ones to improve output reliability. Extensive experiments across multiple models and benchmarks confirm that VDC substantially mitigates hallucinations.
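The correction step described above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' implementation: the candidate ranking, support scores, and threshold are all illustrative assumptions.

```python
# Sketch of the Validated Dominance Correction (VDC) idea: if the dominant
# (top-ranked) candidate token lacks support from both visual perception
# (attention) and internal knowledge (FFN), fall back to the highest-ranked
# candidate that is supported. Names and the threshold are assumptions,
# not the paper's actual procedure.

def vdc_select(candidates, attn_support, ffn_support, threshold=0.5):
    """candidates: tokens ranked by logit, most dominant first;
    attn_support / ffn_support: per-token support scores in [0, 1]."""
    for tok in candidates:
        # A token is "validated" if at least one pathway supports it.
        if (attn_support.get(tok, 0.0) >= threshold
                or ffn_support.get(tok, 0.0) >= threshold):
            return tok
    # No candidate is validated; keep the original dominant token.
    return candidates[0]
```

For example, if the dominant candidate "sofa" has no attention or FFN support but the runner-up "dog" is visually grounded, the sketch returns "dog" instead of the likely hallucination.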

Guangtao Lyu, Xinyi Cheng, Chenghao Xu, Qi Liu, Muli Yang, Fen Fang, Huilin Chen, Jiexi Yan, Xu Yang, Cheng Deng • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Hallucination Evaluation | MSCOCO | CHAIR_i | 9.8 | 104
Object Hallucination | POPE v1.0 (Random) | Accuracy | 90.07 | 24
Object Hallucination | POPE Popular v1.0 | Accuracy | 88.03 | 24
Object Hallucination | POPE Adversarial v1.0 | Accuracy | 84.4 | 24
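For reference, CHAIR_i (instance-level CHAIR) is the fraction of object mentions in a caption that do not appear in the image's ground-truth object set, so lower is better. A minimal sketch of that computation (function and variable names are our own):

```python
# CHAIR_i: proportion of mentioned object instances that are hallucinated,
# i.e. absent from the ground-truth object annotations for the image.
def chair_i(mentioned_objects, ground_truth_objects):
    gt = set(ground_truth_objects)
    hallucinated = [obj for obj in mentioned_objects if obj not in gt]
    return len(hallucinated) / len(mentioned_objects) if mentioned_objects else 0.0
```

So a caption mentioning three objects, one of which is not in the image, scores CHAIR_i of 1/3; the 9.8 above is this ratio expressed as a percentage over the MSCOCO evaluation set.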
