TIPSv2: Advancing Vision-Language Pretraining with Enhanced Patch-Text Alignment

About

Recent progress in vision-language pretraining has enabled significant improvements to many downstream computer vision applications, such as classification, retrieval, segmentation, and depth prediction. However, a fundamental capability that these models still struggle with is aligning dense patch representations with the text embeddings of corresponding concepts. In this work, we investigate this critical issue and propose novel techniques to enhance this capability in foundational vision-language models. First, we reveal that a patch-level distillation procedure significantly boosts dense patch-text alignment: surprisingly, the patch-text alignment of the distilled student model substantially surpasses that of the teacher model. This observation inspires us to consider modifications to pretraining recipes, leading us to propose iBOT++, an upgrade to the commonly used iBOT masked image objective in which unmasked tokens also contribute directly to the loss. This dramatically enhances the patch-text alignment of pretrained models. Additionally, to improve the efficiency and effectiveness of vision-language pretraining, we modify the exponential moving average setup in the learning recipe and introduce a caption sampling strategy that benefits from synthetic captions at different granularities. Combining these components, we develop TIPSv2, a new family of image-text encoder models suitable for a wide range of downstream applications. Through comprehensive experiments on 9 tasks and 20 datasets, we demonstrate strong performance, generally on par with or better than recent vision encoder models. Code and models are released via our project page at https://gdm-tipsv2.github.io/.
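
The abstract does not spell out the iBOT++ objective, so the following is a minimal PyTorch-style sketch of the stated core idea: unmasked patch tokens contribute directly to the patch-level distillation loss alongside the masked ones. The function name and the `unmasked_weight` knob are illustrative assumptions, not the paper's formulation, and practical details such as temperature scaling and teacher centering are omitted.

```python
import torch
import torch.nn.functional as F

def patch_distillation_loss(student_logits, teacher_logits, mask, unmasked_weight=1.0):
    """Patch-level distillation loss in the spirit of iBOT++ (a sketch).

    Plain iBOT applies the patch loss only at masked positions; the
    upgrade described in the abstract lets unmasked tokens contribute
    as well. Assumes `mask` contains both masked and unmasked positions.

    student_logits, teacher_logits: (batch, num_patches, num_prototypes)
    mask: (batch, num_patches) bool, True where the patch was masked.
    """
    # Teacher targets: softmax over the prototype dimension, no gradient.
    teacher_probs = F.softmax(teacher_logits.detach(), dim=-1)
    # Per-token cross-entropy between teacher targets and student predictions.
    per_token = -(teacher_probs * F.log_softmax(student_logits, dim=-1)).sum(dim=-1)
    masked_loss = per_token[mask].mean()     # the classic iBOT term
    unmasked_loss = per_token[~mask].mean()  # the extra term described for iBOT++
    return masked_loss + unmasked_weight * unmasked_loss

# Example: roughly 30% of 196 patch tokens are masked.
student = torch.randn(2, 196, 1024, requires_grad=True)
teacher = torch.randn(2, 196, 1024)
mask = torch.rand(2, 196) < 0.3
loss = patch_distillation_loss(student, teacher, mask)
loss.backward()
```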

Bingyi Cao, Koert Chen, Kevis-Kokitsi Maninis, Kaifeng Chen, Arjun Karpur, Ye Xia, Sahil Dua, Tanmaya Dabral, Guangxing Han, Bohyung Han, Joshua Ainslie, Alex Bewley, Mithun Jacob, René Wagner, Washington Ramos, Krzysztof Choromanski, Mojtaba Seyedhosseini, Howard Zhou, André Araujo • 2026
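
The abstract also mentions a caption sampling strategy over synthetic captions at different granularities. As a rough illustration of what such per-step sampling could look like (the granularity levels, weights, and function name below are assumptions, not the paper's method):

```python
import random

def sample_caption(captions_by_granularity, weights=None):
    """Pick one synthetic caption for an image per training step:
    first choose a granularity level, then a caption within it."""
    levels = list(captions_by_granularity)
    level = random.choices(levels, weights=weights)[0]
    return random.choice(captions_by_granularity[level])

# Hypothetical pool: a short, tag-like caption and a long descriptive one.
pool = {
    "short": ["a dog on a beach"],
    "long": ["a golden retriever running along a sandy beach at sunset, "
             "with waves breaking in the background"],
}
print(sample_caption(pool, weights=[0.5, 0.5]))
```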

Related benchmarks

Task                     | Dataset      | Metric         | Result | Rank
Semantic Segmentation    | ADE20K       | mIoU           | 51.6   | 366
Depth Estimation         | NYU Depth V2 | --             | --     | 209
Text-to-Image Retrieval  | COCO         | --             | --     | 156
Image-to-Text Retrieval  | COCO         | --             | --     | 149
Semantic Segmentation    | PC-59        | mIoU           | 37.1   | 148
Semantic Segmentation    | VOC21        | mIoU           | 44.4   | 108
Image Classification     | ImageNet     | Top-1 Accuracy | 86.8   | 80
Image-to-Text Retrieval  | Flickr       | R@1            | 95.1   | 45
Text-to-Image Retrieval  | Flickr       | --             | --     | 40
Image-to-Text Retrieval  | DOCCI        | --             | --     | 38

Showing 10 of 18 rows.
