
VACoDe: Visual Augmented Contrastive Decoding

About

Despite the astonishing performance of recent Large Vision-Language Models (LVLMs), these models often generate inaccurate responses. To address this issue, previous studies have focused on mitigating hallucinations by employing contrastive decoding (CD) with augmented images, which amplifies the contrast with the original image. However, these methods have limitations, including reliance on a single augmentation, which is restrictive for certain tasks, and the high cost of using external knowledge. In this study, we address these limitations by exploring how to utilize multiple image augmentations. Through extensive experiments, we observed that different augmentations produce varying levels of contrast depending on the task. Based on this observation, we introduce a novel method called VACoDe, Visual Augmented Contrastive Decoding. This method adaptively selects the augmentation with the highest contrast for each task using the proposed softmax distance metric. Our empirical tests show that VACoDe outperforms previous methods and improves output quality in various vision-language tasks. Additionally, VACoDe can be universally applied across different model types and sizes without additional training or the use of external models and data.

Sihyeon Kim, Boryeong Cho, Sangmin Bae, Sumyeong Ahn, Se-Young Yun• 2024
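The adaptive selection described in the abstract can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's implementation: the exact softmax distance metric and the contrastive-decoding combination rule (here, the common `(1 + alpha) * original - alpha * contrast` form) are assumed, and the function names are hypothetical.

```python
# Hypothetical sketch of VACoDe-style adaptive contrastive decoding.
# Assumptions: L1 distance between softmax distributions as the "softmax
# distance", and the standard CD logit combination; neither is confirmed
# by the abstract.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def softmax_distance(logits_orig, logits_aug):
    # Contrast between the output distributions for the original and
    # augmented images (L1 distance assumed for illustration).
    return np.abs(softmax(logits_orig) - softmax(logits_aug)).sum()

def vacode_step(logits_orig, aug_logits_list, alpha=1.0):
    # Pick the augmentation whose output distribution differs most
    # from the original image's distribution ...
    dists = [softmax_distance(logits_orig, la) for la in aug_logits_list]
    contrast = aug_logits_list[int(np.argmax(dists))]
    # ... then contrast the original logits against it.
    return (1 + alpha) * logits_orig - alpha * contrast
```

Intuitively, the augmentation that perturbs the model's output distribution the most is the one whose hallucination-prone signal is most useful to subtract away; no external model or extra training is involved.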

Related benchmarks

Task | Dataset | Result | Rank
Multimodal Capability Evaluation | MM-Vet | Score: 65.08 | 345
Visual Perception | MMVP | -- | 82
Object Hallucination Evaluation | POPE A-OKVQA | Accuracy: 89.07 | 75
Multimodal Evaluation | LLaVA-Bench In-the-Wild | Score: 121.1 | 56
Object Hallucination Evaluation | MSCOCO | Accuracy: 88.97 | 41
Visual Perception | MME | Perception Score: 1.72e+3 | 28
Multimodal Hallucination Evaluation | MMHal-Bench | Average Score: 4.63 | 20
