
Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment

About

Existing image-text modality alignment in Vision Language Models (VLMs) treats each text token equally in an autoregressive manner. Despite being simple and effective, this method results in sub-optimal cross-modal alignment by over-emphasizing text tokens that are weakly correlated with, or even contradictory to, the input images. In this paper, we advocate assigning a distinct contribution to each text token based on its visual correlation. Specifically, we show that, by contrasting image inputs, the difference in prediction logits on each text token provides strong guidance of its visual correlation. We therefore introduce Contrastive ALignment (CAL), a simple yet effective re-weighting strategy that prioritizes training on visually correlated tokens. Our experimental results demonstrate that CAL consistently improves different types of VLMs across different resolutions and model sizes on various benchmark datasets. Importantly, our method incurs minimal additional computational overhead, rendering it highly efficient compared to alternative data scaling strategies. Code is available at https://github.com/foundation-multimodal-models/CAL.
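The re-weighting idea from the abstract can be illustrated with a minimal sketch: score each target token by how much the image improves its predicted logit, then use that score to weight the per-token loss. This is an illustrative NumPy sketch, not the paper's implementation; the function names, the clipping of negative differences, and the mean-one normalization are assumptions made here for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cal_token_weights(logits_with_img, logits_without_img, targets):
    """Weight each text token by its visual correlation (sketch).

    logits_*: (num_tokens, vocab_size) prediction logits for the same
    text, with and without the image in the input.
    targets: (num_tokens,) ground-truth token ids.
    """
    idx = np.arange(len(targets))
    # logit gain on the ground-truth token when the image is present
    diff = logits_with_img[idx, targets] - logits_without_img[idx, targets]
    # tokens that benefit from the image are treated as visually
    # correlated; negatives are floored (a choice made for this sketch)
    w = np.clip(diff, 0.0, None) + 1e-3
    # normalize so the weights average to 1 over the sequence
    return w / w.sum() * len(targets)

def cal_weighted_loss(logits_with_img, logits_without_img, targets):
    # standard token-level cross-entropy, re-weighted per token
    probs = softmax(logits_with_img)
    nll = -np.log(probs[np.arange(len(targets)), targets])
    w = cal_token_weights(logits_with_img, logits_without_img, targets)
    return float((w * nll).mean())
```

Because both forward passes share the same text, this adds roughly one extra (image-free) forward pass per step, which is consistent with the abstract's claim of minimal computational overhead.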

Xin Xiao, Bohong Wu, Jiacong Wang, Chunyuan Li, Xun Zhou, Haoyuan Guo• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | Accuracy | 87.5 | 1455 |
| Visual Question Answering | TextVQA | Accuracy | 70.3 | 1285 |
| Multimodal Evaluation | MME | Score | 1610 | 658 |
| OCR Evaluation | OCRBench | Score | 574 | 329 |
| Multimodal Reasoning | MMStar | Accuracy | 38.5 | 143 |
| Visual Grounding | RefCOCOg (test) | -- | -- | 119 |
| Science Question Answering | ScienceQA (SQA-I) | Accuracy | 73.1 | 103 |
| Text-based Visual Question Answering | TextVQA (VQA^T) | Accuracy | 68.8 | 96 |
| Image Captioning | TextCaps | CIDEr | 124.7 | 96 |
| Document-oriented Visual Question Answering | DocVQA | Accuracy | 80.1 | 72 |

Showing 10 of 14 rows.
