
TripletCLIP: Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives

About

Contrastive Language-Image Pretraining (CLIP) models maximize the mutual information between text and visual modalities to learn representations. This makes the nature of the training data a significant factor in the efficacy of CLIP for downstream tasks. However, the lack of compositional diversity in contemporary image-text datasets limits the compositional reasoning ability of CLIP. We show that generating "hard" negative captions via in-context learning and synthesizing corresponding negative images with text-to-image generators offers a solution. We introduce a novel contrastive pre-training strategy that leverages these hard negative captions and images in an alternating fashion to train CLIP. We demonstrate that our method, named TripletCLIP, when applied to existing datasets such as CC3M and CC12M, enhances the compositional capabilities of CLIP, resulting in an absolute improvement of over 9% on the SugarCrepe benchmark on an equal computational budget, as well as improvements in zero-shot image classification and image retrieval. Our code, models, and data are available at: https://tripletclip.github.io
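The abstract describes a contrastive objective that pairs each image-text example with a hard negative caption (and, in alternation, a synthetic negative image). A minimal sketch of how such a loss might be composed in PyTorch is shown below; the function names, the single-negative-per-example setup, and the exact loss composition are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Standard symmetric InfoNCE over a batch of matched (image, text)
    # pairs: each image should score highest with its own caption, and
    # vice versa, against all in-batch alternatives.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(len(img_emb))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2


def hard_negative_loss(img_emb, txt_emb, neg_txt_emb, temperature=0.07):
    # Extends the image-to-text direction with one "hard" negative caption
    # per image (e.g. a compositionally perturbed caption), appended as
    # extra columns of the similarity matrix. Each image must now prefer
    # its true caption over both in-batch and hard negatives. The same
    # construction could be applied with synthetic negative images on the
    # text-to-image side in alternating steps.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_all = F.normalize(torch.cat([txt_emb, neg_txt_emb]), dim=-1)
    logits = img_emb @ txt_all.t() / temperature
    targets = torch.arange(len(img_emb))
    return F.cross_entropy(logits, targets)
```

In training, the two losses would be computed on each batch's encoder outputs, with the hard-negative term steering the model toward fine-grained compositional distinctions that plain in-batch negatives rarely provide.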

Maitreya Patel, Abhiram Kusumba, Sheng Cheng, Changhoon Kim, Tejas Gokhale, Chitta Baral, Yezhou Yang • 2024

Related benchmarks

| Task                    | Dataset            | Metric           | Result | Rank |
|-------------------------|--------------------|------------------|--------|------|
| Image Classification    | ImageNet-1K        | Top-1 Acc        | 45.92  | 600  |
| Text-to-Image Retrieval | Flickr30K          | R@1              | 28     | 531  |
| Text-to-Image Retrieval | Flickr30k (test)   | --               | --     | 445  |
| Image-to-Text Retrieval | Flickr30K          | R@1              | 25.28  | 429  |
| Image-to-Text Retrieval | Flickr30k (test)   | --               | --     | 392  |
| Image Classification    | ImageNet-1k (val)  | Top-1 Acc        | 23.31  | 188  |
| Object Detection        | COCO               | mAP              | 25.08  | 137  |
| Image-to-Text Retrieval | MSCOCO             | R@1              | 11.38  | 129  |
| Text-to-Image Retrieval | MSCOCO             | R@1              | 14.6   | 123  |
| Image Classification    | VTAB               | Overall Accuracy | 20.81  | 103  |

Showing 10 of 42 rows.
