
SILC: Improving Vision Language Pretraining with Self-Distillation

About

Image-text pretraining on web-scale image-caption datasets has become the default recipe for open-vocabulary classification and retrieval models, thanks to the success of CLIP and its variants. Several works have also used CLIP features for dense prediction tasks and have shown the emergence of open-set abilities. However, the contrastive objective used by these models only focuses on image-text alignment and does not incentivise image feature learning for dense prediction tasks. In this work, we introduce SILC, a novel framework for vision-language pretraining. SILC improves image-text contrastive learning with the simple addition of local-to-global correspondence learning by self-distillation. We show that distilling local image features from an exponential moving average (EMA) teacher model significantly improves model performance on dense prediction tasks like detection and segmentation, while also providing improvements on image-level tasks such as classification and retrieval. SILC models set a new state of the art for zero-shot classification, few-shot classification, image and text retrieval, zero-shot segmentation, and open-vocabulary segmentation. We further show that SILC features greatly benefit open-vocabulary detection, captioning, and visual question answering.
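The abstract names two training signals: a CLIP-style image-text contrastive loss, and a DINO-style self-distillation loss that matches a student's prediction on a local crop to an EMA teacher's prediction on the global view. The paper's exact projection heads, temperatures, and loss weighting are not reproduced here; the following is a minimal NumPy sketch of those two objectives plus the EMA update, with all temperature values chosen for illustration only:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contrastive_loss(img_emb, txt_emb, temp=0.07):
    # CLIP-style InfoNCE: matched image-text pairs sit on the diagonal
    # of the similarity matrix; average both retrieval directions.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temp
    n = len(img)
    idx = np.arange(n)
    loss_i2t = -np.log(softmax(logits, axis=1)[idx, idx]).mean()
    loss_t2i = -np.log(softmax(logits, axis=0)[idx, idx]).mean()
    return (loss_i2t + loss_t2i) / 2

def distillation_loss(student_local, teacher_global, t_s=0.1, t_t=0.04):
    # Local-to-global self-distillation: cross-entropy between the EMA
    # teacher's distribution on the global view and the student's
    # distribution on a local crop (teacher sharper via lower temperature).
    p_teacher = softmax(teacher_global / t_t)
    log_p_student = np.log(softmax(student_local / t_s))
    return -(p_teacher * log_p_student).sum(axis=1).mean()

def ema_update(teacher_params, student_params, momentum=0.996):
    # Teacher weights track the student as an exponential moving average.
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]
```

In this sketch the total objective would be a weighted sum of the two losses, with the teacher updated by `ema_update` after each student step rather than by gradient descent.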

Muhammad Ferjad Naeem, Yongqin Xian, Xiaohua Zhai, Lukas Hoyer, Luc Van Gool, Federico Tombari • 2023

Related benchmarks

Task                     Dataset                 Result               Rank
Semantic segmentation    ADE20K                  mIoU 19.3            1024
Semantic segmentation    Cityscapes              mIoU 26.9            658
Semantic segmentation    COCO Stuff              mIoU 20.8            379
Semantic segmentation    ADE20K A-150            mIoU 37.7            217
Semantic segmentation    Pascal Context (test)   --                   191
Text-to-Image Retrieval  COCO                    --                   156
Image-to-Text Retrieval  COCO                    --                   149
Semantic segmentation    PC-59                   mIoU 31.6            148
Semantic segmentation    VOC-20                  mIoU 77.5            118
Image Classification     ImageNet                Top-1 Accuracy 83.7  80

Showing 10 of 22 rows
