
COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training

About

Vision-Language Models (VLMs) trained with contrastive loss have achieved significant advancements in various vision and language tasks. However, the global nature of the contrastive loss makes VLMs focus predominantly on foreground objects, neglecting other crucial information in the image, which limits their effectiveness in downstream tasks. To address these challenges, we propose COSMOS: CrOSs-MOdality Self-distillation for vision-language pre-training that integrates a novel text-cropping strategy and cross-attention module into a self-supervised learning framework. We create global and local views of images and texts (i.e., multi-modal augmentations), which are essential for self-distillation in VLMs. We further introduce a cross-attention module, enabling COSMOS to learn comprehensive cross-modal representations optimized via a cross-modality self-distillation loss. COSMOS consistently outperforms previous strong baselines on various zero-shot downstream tasks, including retrieval, classification, and semantic segmentation. Additionally, it surpasses CLIP-based models trained on larger datasets in visual perception and contextual understanding tasks. Code is available at https://github.com/ExplainableML/cosmos.
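To make the abstract's two ingredients concrete, here is a minimal, hypothetical sketch of (a) a text-cropping augmentation that produces global and local caption views, and (b) a DINO-style self-distillation loss in which a teacher's sharpened output distribution supervises a student. All function names, parameters, and temperatures are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import math
import random

def text_crop(caption, local_len=5, seed=0):
    # Hypothetical multi-modal augmentation sketch: the global view is the
    # full caption, and a local view is a random contiguous sub-span of words.
    words = caption.split()
    if len(words) <= local_len:
        return caption, caption
    start = random.Random(seed).randrange(len(words) - local_len + 1)
    return caption, " ".join(words[start:start + local_len])

def softmax(logits, temp):
    # Temperature-scaled softmax over a plain list of logits.
    scaled = [x / temp for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def self_distillation_loss(student_logits, teacher_logits,
                           temp_student=0.1, temp_teacher=0.04):
    # DINO-style soft cross-entropy: the teacher (fed the global view)
    # produces a sharpened target distribution; the student (fed a local,
    # cropped view) is trained to match it. Temperatures are illustrative.
    p_teacher = softmax(teacher_logits, temp_teacher)
    log_p_student = [math.log(p) for p in softmax(student_logits, temp_student)]
    return -sum(pt * lp for pt, lp in zip(p_teacher, log_p_student))
```

Under this sketch, a student whose distribution already matches the teacher's incurs near-zero loss, while a mismatched student is penalized, which is the signal that drives the cross-modal representations toward agreement across views.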

Sanghwan Kim, Rui Xiao, Mariana-Iuliana Georgescu, Stephan Alaniz, Zeynep Akata • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy | 83.2 | 1455 |
| Visual Question Answering | GQA | Accuracy | 60.4 | 1249 |
| Semantic segmentation | ADE20K | mIoU | 17.7 | 1024 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 55.3 | 807 |
| Semantic segmentation | Cityscapes | mIoU | 34.7 | 658 |
| Image Classification | Stanford Cars | -- | -- | 635 |
| Image Classification | CIFAR-10 | -- | -- | 564 |
| Image Classification | Flowers102 | Accuracy | 52.2 | 558 |
| Image Classification | Food-101 | -- | -- | 542 |
| Image Classification | DTD | -- | -- | 542 |
Showing 10 of 57 rows
