
DINOv2 Meets Text: A Unified Framework for Image- and Pixel-Level Vision-Language Alignment

About

Self-supervised visual foundation models produce powerful embeddings that achieve remarkable performance on a wide range of downstream tasks. However, unlike vision-language models such as CLIP, self-supervised visual features are not readily aligned with language, hindering their adoption in open-vocabulary tasks. Our method, named dino.txt, unlocks this new ability for DINOv2, a widely used self-supervised visual encoder. We build upon the LiT training strategy, which trains a text encoder to align with a frozen vision model but leads to unsatisfactory results on dense tasks. We propose several key ingredients to improve performance on both global and dense tasks, such as concatenating the [CLS] token with the patch average to train the alignment and curating data using both text and image modalities. With these, we successfully train a CLIP-like model with only a fraction of the computational cost compared to CLIP while achieving state-of-the-art results in zero-shot classification and open-vocabulary semantic segmentation.
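The key image-side ingredient described above is simple to state: instead of aligning the text encoder against the [CLS] token alone, dino.txt concatenates the [CLS] token with the average of the patch tokens. Below is a minimal NumPy sketch of that step and of a cosine-similarity alignment score; the function names and shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def visual_representation(cls_token, patch_tokens):
    """Concatenate the [CLS] token with the patch average (hypothetical helper).

    cls_token:    (d,) global embedding from the frozen vision encoder
    patch_tokens: (n, d) patch embeddings from the same encoder
    Returns a (2d,) vector used as the image-side input to the alignment.
    """
    patch_avg = patch_tokens.mean(axis=0)          # average-pool the patches
    return np.concatenate([cls_token, patch_avg])  # (2d,)

def alignment_score(img_vec, txt_vec):
    """Cosine similarity between (projected) image and text embeddings."""
    img = img_vec / np.linalg.norm(img_vec)
    txt = txt_vec / np.linalg.norm(txt_vec)
    return float(img @ txt)
```

In a LiT-style setup, only the text encoder (and any projection layers) would be trained with a contrastive loss over these scores, while the DINOv2 vision encoder stays frozen; because patch tokens also enter the training signal, the learned text embeddings remain compatible with per-patch features at inference time, which is what enables the dense, open-vocabulary segmentation results.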

Cijo Jose, Théo Moutakanni, Dahyun Kang, Federico Baldassarre, Timothée Darcet, Hu Xu, Daniel Li, Marc Szafraniec, Michaël Ramamonjisoa, Maxime Oquab, Oriane Siméoni, Huy V. Vo, Patrick Labatut, Piotr Bojanowski • 2024

Related benchmarks

Task                  | Dataset            | Metric     | Result | Rank
----------------------|--------------------|------------|--------|-----
Semantic segmentation | ADE20K             | mIoU       | 25.1   | 936
Image classification  | ImageNet-1K        | Top-1 Acc. | 81.6   | 836
Semantic segmentation | Cityscapes         | mIoU       | 41.0   | 578
Image classification  | ImageNet A         | Top-1 Acc. | 83.2   | 553
Image classification  | ImageNet V2        | Top-1 Acc. | 75.9   | 487
Semantic segmentation | COCO Stuff         | mIoU       | 24.1   | 195
Image classification  | ObjectNet          | Top-1 Acc. | 74.5   | 177
Semantic segmentation | Pascal Context 59  | mIoU       | 36.7   | 164
Image classification  | Places205          | Top-1 Acc. | 61.2   | 55
Verb recognition      | Epic-Kitchens (EK) | Top-1 Acc. | 1.4    | 22
Showing 10 of 16 rows
