Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding

About

Vision-Language Models (VLMs), such as CLIP, exhibit strong image-text comprehension abilities, facilitating advances in several downstream tasks such as zero-shot image classification, image-text retrieval, and text-to-image generation. However, the compositional reasoning abilities of existing VLMs remain subpar. The root of this limitation lies in the inadequate alignment between the images and captions in the pretraining datasets. Additionally, the current contrastive learning objective fails to focus on fine-grained grounding components like relations, actions, and attributes, resulting in "bag-of-words" representations. We introduce a simple and effective method to improve compositional reasoning in VLMs. Our method better leverages available datasets by refining and expanding the standard image-text contrastive learning framework. Our approach requires no specific annotations and incurs no extra parameters. When integrated with CLIP, our technique yields notable improvement over state-of-the-art baselines across five vision-language compositional benchmarks. We open-source our code at https://github.com/lezhang7/Enhance-FineGrained.
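
The linked repository contains the authors' exact objective; the sketch below is only a rough illustration of the two ideas the title names, under assumed formulations: a standard CLIP-style InfoNCE loss, an intra-modal term that keeps each caption separated from a perturbed hard negative version of itself, and a cross-modal ranking margin that scores the true caption above the hard negative for its image. The function name, margin values, and how hard negatives are produced are all assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def composition_aware_loss(img, txt, txt_neg, tau=0.07, margin=0.2):
    """Illustrative composite objective (not the paper's exact loss).

    img:     (B, D) L2-normalized image embeddings
    txt:     (B, D) L2-normalized embeddings of the paired captions
    txt_neg: (B, D) L2-normalized embeddings of hard negative captions,
             e.g. the paired caption with a relation/attribute swapped
    """
    B = img.size(0)
    labels = torch.arange(B, device=img.device)

    # 1) Symmetric image-text InfoNCE, with the hard negative captions
    #    appended as extra candidates on the image-to-text side.
    cand = torch.cat([txt, txt_neg], dim=0)          # (2B, D)
    logits_i2t = img @ cand.t() / tau                # (B, 2B)
    logits_t2i = txt @ img.t() / tau                 # (B, B)
    l_itc = 0.5 * (F.cross_entropy(logits_i2t, labels)
                   + F.cross_entropy(logits_t2i, labels))

    # 2) Intra-modal contrast: a caption and its perturbed hard negative
    #    should not collapse to the same point in text space.
    sim_tt = (txt * txt_neg).sum(dim=-1)             # cosine (normalized)
    l_intra = F.relu(sim_tt - (1.0 - margin)).mean()

    # 3) Cross-modal ranking: the true caption must outscore the hard
    #    negative for its own image by at least `margin`.
    s_pos = (img * txt).sum(dim=-1)
    s_neg = (img * txt_neg).sum(dim=-1)
    l_rank = F.relu(margin + s_neg - s_pos).mean()

    return l_itc + l_intra + l_rank
```

Because the hard negative captions can come from simple rule-based perturbations of the existing captions, an objective of this shape needs no new annotations and adds no parameters, consistent with the claims in the abstract.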

Le Zhang, Rabiul Awal, Aishwarya Agrawal • 2023

Related benchmarks

Task                            Dataset                    Result                  Rank
Image Classification            CIFAR-100                  Accuracy: 78.1          691
Image Classification            ImageNet-1K                Accuracy: 94.2          193
Text-to-Image Retrieval         MS-COCO                    --                      151
Image-to-Text Retrieval         MS-COCO                    --                      132
Aggregate Model Performance     Combined Benchmark Suite   Average Score: 60.5     57
Compositional Reasoning         SugarCrepe                 Overall Accuracy: 87.5  50
Zero-shot Image Classification  ImageNet-1k (val)          Accuracy: 40.4          49
Image-Text Retrieval            Flickr30k (test)           --                      45
Image-to-Text Retrieval         DOCCI (test)               Recall@1: 30.8          43
Compositional Reasoning         VL-Checklist               --                      37

Showing 10 of 25 rows.

Other info

Code: https://github.com/lezhang7/Enhance-FineGrained