TIPS: Text-Image Pretraining with Spatial awareness

About

While image-text representation learning has become very popular in recent years, existing models tend to lack spatial awareness and have limited direct applicability for dense understanding tasks. For this reason, self-supervised image-only pretraining is still the go-to method for many dense vision applications (e.g. depth estimation, semantic segmentation), despite the lack of explicit supervisory signals. In this paper, we close this gap between image-text and self-supervised learning, by proposing a novel general-purpose image-text model, which can be effectively used off the shelf for dense and global vision tasks. Our method, which we refer to as Text-Image Pretraining with Spatial awareness (TIPS), leverages two simple and effective insights. First, on textual supervision: we reveal that replacing noisy web image captions by synthetically generated textual descriptions boosts dense understanding performance significantly, due to a much richer signal for learning spatially aware representations. We propose an adapted training method that combines noisy and synthetic captions, resulting in improvements across both dense and global understanding tasks. Second, on the learning technique: we propose to combine contrastive image-text learning with self-supervised masked image modeling, to encourage spatial coherence, unlocking substantial enhancements for downstream applications. Building on these two ideas, we scale our model using the transformer architecture, trained on a curated set of public images. Our experiments are conducted on 8 tasks involving 16 datasets in total, demonstrating strong off-the-shelf performance on both dense and global understanding, for several image-only and image-text tasks. Code and models are released at https://github.com/google-deepmind/tips.
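
To make the two ideas in the abstract concrete, below is a minimal PyTorch-style sketch of one training step. It is illustrative only, not the released TIPS implementation (see the linked repository for the actual code): the function names, the 50/50 noisy-vs-synthetic caption mixing rule, the pixel-space MIM reconstruction loss, and the loss weighting are all assumptions.

```python
# Illustrative sketch of the two TIPS ideas from the abstract:
# (1) training on a mix of noisy web captions and synthetic captions, and
# (2) adding a masked image modeling (MIM) loss to the contrastive loss.
# All names, the mixing rule, and the weights are assumptions, not the
# released implementation.
import random
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric CLIP-style InfoNCE over a batch of paired embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature      # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def mim_loss(pred_patches, target_patches, mask):
    """Reconstruction error on masked patches only (mask: 1 = masked)."""
    per_patch = (pred_patches - target_patches).pow(2).mean(dim=-1)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)

def training_step(model, batch, synthetic_prob=0.5, mim_weight=1.0):
    # Idea 1: per example, pick either the noisy web caption or the
    # synthetically generated description (the mixing ratio is hypothetical).
    captions = [syn if random.random() < synthetic_prob else noisy
                for noisy, syn in zip(batch["web_captions"],
                                      batch["synthetic_captions"])]

    img_emb, txt_emb, pred, target, mask = model(batch["images"], captions)

    # Idea 2: contrastive image-text alignment plus MIM, which encourages
    # spatially coherent patch features.
    return contrastive_loss(img_emb, txt_emb) + mim_weight * mim_loss(
        pred, target, mask)
```

The intent of the combination is that the contrastive term aligns global image and text embeddings while the masked-patch term forces the image encoder to keep locally informative features, which is what benefits dense tasks such as segmentation and depth estimation.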

Kevis-Kokitsi Maninis, Kaifeng Chen, Soham Ghosh, Arjun Karpur, Koert Chen, Ye Xia, Bingyi Cao, Daniel Salz, Guangxing Han, Jan Dlabal, Dan Gnanapragasam, Mojtaba Seyedhosseini, Howard Zhou, Andre Araujo • 2024

Related benchmarks

Task                          Dataset             Result       Rank
Semantic segmentation         ADE20K              54.1 mIoU    366
3D Semantic Segmentation      ScanNet V2 (val)    39.7 mIoU    209
Depth Estimation              NYU Depth V2        --           209
Monocular Depth Estimation    KITTI               --           203
Text-to-Image Retrieval       COCO                --           156
Image-to-Text Retrieval       COCO                --           149
Semantic segmentation         PC-59               33.5 mIoU    148
Monocular Depth Estimation    NYU V2              --           131
Semantic segmentation         Pascal VOC          86.7 mIoU    129
Semantic segmentation         VOC21               30.5 mIoU    108

(Showing 10 of 29 rows.)
