
Self-Aug: Query and Entropy Adaptive Decoding for Large Vision-Language Models

About

Large Vision-Language Models (LVLMs) have demonstrated remarkable multimodal capabilities, but they inherit the tendency to hallucinate from their underlying language models. Visual contrastive decoding has been proposed to mitigate this issue, yet existing methods apply generic visual augmentations that disregard the specific context provided by the text query, limiting their effectiveness. This study introduces a training-free decoding strategy that addresses these limitations through two key contributions: first, a self-augmentation prompting strategy that leverages the model's intrinsic knowledge to dynamically align the semantics of the query and the visual augmentation; second, an adaptive thresholding algorithm that adjusts the next-token candidate set size based on output sparsity, exploiting the full information in the logit distribution. Extensive experiments across four LVLMs and seven benchmarks show that the proposed decoding significantly enhances factual consistency compared to state-of-the-art decoding methods. This work highlights the importance of combining query-dependent augmentation with entropy-aware decoding for effective generation in LVLMs.
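The two ingredients the abstract names can be illustrated in isolation. The sketch below is not the paper's implementation: the contrastive combination follows the standard visual-contrastive-decoding form, and the entropy-scaled cutoff rule (`base_mass` and the linear scaling) is a hypothetical stand-in for the paper's adaptive thresholding, shown only to make the "peaked distribution keeps few candidates, flat distribution keeps many" idea concrete.

```python
import math

def contrast_logits(logits_orig, logits_aug, alpha=1.0):
    """Standard visual contrastive decoding combination: amplify the
    original-image logits and subtract the augmented-image logits to
    penalize tokens not grounded in the visual input."""
    return [(1 + alpha) * o - alpha * a
            for o, a in zip(logits_orig, logits_aug)]

def entropy_adaptive_candidates(logits, base_mass=0.5):
    """Select a next-token candidate set whose size adapts to output
    sparsity: a peaked (low-entropy) distribution keeps few tokens, a
    flat (high-entropy) one keeps many. The scaling rule here is an
    illustrative assumption, not the paper's exact algorithm."""
    # Softmax with max-subtraction for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Normalized entropy in [0, 1]: 0 = one-hot, 1 = uniform.
    h = -sum(p * math.log(p + 1e-12) for p in probs) / math.log(len(probs))
    # Hypothetical rule: flatter distributions widen the nucleus mass.
    keep_mass = base_mass + (1.0 - base_mass) * h
    # Keep the smallest set of top tokens covering keep_mass.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= keep_mass:
            break
    return kept
```

With a sharply peaked distribution the candidate set collapses to the top token, while a uniform distribution retains the full vocabulary, which is the adaptive behavior the second contribution targets.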

Eun Woo Im, Muhammad Kashif Ali, Vivek Gupta • 2025

Related benchmarks

Task                                 Dataset                   Result                   Rank
Multimodal Capability Evaluation     MM-Vet                    Score: 64.5              345
Visual Perception                    MMVP                      --                       82
Object Hallucination Evaluation      POPE A-OKVQA              Accuracy: 88.68          75
Multimodal Evaluation                LLaVA-Bench In-the-Wild   Score: 121.9             56
Object Hallucination Evaluation      MSCOCO                    Accuracy: 88.79          41
Visual Perception                    MME                       Perception Score: 1730   28
Multimodal Hallucination Evaluation  MMHal-Bench               Average Score: 4.67      20
