UniCLIP: Unified Framework for Contrastive Language-Image Pre-training

About

Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications. Subsequent works have sought to improve data efficiency by adding self-supervision terms, but the inter-domain (image-text) and intra-domain (image-image) contrastive losses are defined on separate embedding spaces in those works, so many feasible combinations of supervision are overlooked. To overcome this issue, we propose UniCLIP, a Unified framework for Contrastive Language-Image Pre-training. UniCLIP integrates the contrastive losses of both inter-domain pairs and intra-domain pairs into a single universal space. The discrepancies that arise when integrating contrastive losses across different domains are resolved by the three key components of UniCLIP: (1) augmentation-aware feature embedding, (2) MP-NCE loss, and (3) domain-dependent similarity measure. UniCLIP outperforms previous vision-language pre-training methods on various single- and multi-modality downstream tasks. In our experiments, we show that each component of UniCLIP contributes to the final performance.

Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim • 2022
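
To make the unification concrete, the sketch below shows what a multi-positive, InfoNCE-style contrastive loss over a single shared embedding space might look like, with a per-pair temperature as a crude stand-in for the domain-dependent similarity measure. This is a rough illustration under our own assumptions, not the authors' implementation: the function name, the positive-pair construction, and the two temperatures are all hypothetical, and the augmentation-aware feature embedding is omitted entirely.

```python
import torch
import torch.nn.functional as F

def unified_multi_positive_nce(embeddings, domains, instance_ids,
                               tau_intra=0.10, tau_inter=0.07):
    """Illustrative multi-positive contrastive loss in one shared space.

    embeddings   : (N, D) image and text features already projected into a
                   single universal embedding space (augmented views of an
                   image and its caption all appear as rows).
    domains      : (N,) 0 for image rows, 1 for text rows.
    instance_ids : (N,) rows sharing an id are treated as positives.
    tau_intra / tau_inter : temperatures for same-domain vs. cross-domain
                   pairs -- a crude stand-in for domain-dependent similarity.
    """
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t()                                   # (N, N) cosine similarities

    # Domain-dependent scaling: image-image / text-text vs. image-text pairs.
    same_domain = domains.unsqueeze(0) == domains.unsqueeze(1)
    tau = torch.where(same_domain,
                      torch.full_like(sim, tau_intra),
                      torch.full_like(sim, tau_inter))
    logits = sim / tau

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    positives = (instance_ids.unsqueeze(0) == instance_ids.unsqueeze(1)) & ~self_mask

    # Softmax over every non-self candidate, averaged over all positives per
    # anchor, so one anchor can be pulled toward several positives at once
    # (its other augmented views and its caption).
    logits = logits.masked_fill(self_mask, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~positives, 0.0)
    per_anchor = -pos_log_prob.sum(dim=1) / positives.sum(dim=1).clamp(min=1)
    return per_anchor[positives.any(dim=1)].mean()

# Example: two augmented views + one caption per instance, batch of 2 instances.
# feats = torch.randn(6, 512); dom = torch.tensor([0, 0, 1, 0, 0, 1])
# ids = torch.tensor([0, 0, 0, 1, 1, 1])
# loss = unified_multi_positive_nce(feats, dom, ids)
```

The point the abstract makes is that, because augmented image views and captions of the same instance all live in one space, any pair sharing an instance id (image-image or image-text) can serve as a positive for the same anchor, rather than splitting the two losses across separate spaces.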

Related benchmarks

Task                   Dataset              Metric     Result   Rank
Image Classification   Stanford Cars        Accuracy   19.5     635
Image Classification   Food-101             Accuracy   64.6     542
Image Classification   DTD                  Accuracy   36.6     542
Image Classification   SUN397               Accuracy   61.1     425
Image Classification   CIFAR100             Accuracy   56.5     347
Image Classification   Oxford-IIIT Pets     Accuracy   69.2     306
Image Classification   CIFAR10              Accuracy   87.8     240
Image Classification   Oxford Flowers 102   Accuracy   8        234
Image Retrieval        Flickr30k (test)     R@1        34.8     210
Image Classification   FGVC Aircraft        --         --       203
Showing 10 of 14 rows
