
What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs

About

Given only an input image, our method returns bounding boxes of the objects in the image together with phrases that describe them. This is achieved in an open-world paradigm, in which the objects in the input image may not have been encountered during the training of the localization mechanism. Moreover, training takes place in a weakly supervised setting, where no bounding boxes are provided. To achieve this, our method combines two pre-trained networks: the CLIP image-to-text matching score and the BLIP image-captioning tool. Training takes place on COCO images and their captions and is based on CLIP. Then, during inference, BLIP is used to generate a hypothesis regarding the various regions of the current image. Our work generalizes weakly supervised segmentation and phrase grounding and is shown empirically to outperform the state of the art in both domains. It also achieves strong results on the novel task of weakly-supervised, open-world, purely visual phrase grounding presented in our work. For example, on the datasets used for benchmarking phrase grounding, our method shows only a modest degradation compared to methods that use human captions as an additional input. Our code is available at https://github.com/talshaharabany/what-is-where-by-looking and a live demo can be found at https://replicate.com/talshaharabany/what-is-where-by-looking.
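The inference pipeline described above (caption candidate regions with BLIP, then keep the region-phrase pairs that CLIP scores highly) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `blip_caption` and `clip_score` are hypothetical stand-ins for the pre-trained models, with hard-coded toy outputs.

```python
def blip_caption(image, box):
    """Stand-in for BLIP: returns a caption hypothesis for an image region.
    In the actual method, BLIP generates a phrase describing the crop."""
    toy_captions = {
        (0, 0, 50, 50): "a red ball",
        (60, 10, 120, 90): "a small dog",
    }
    return toy_captions[box]

def clip_score(image, box, phrase):
    """Stand-in for CLIP: an image-region/text matching score in [0, 1]."""
    toy_scores = {
        ((0, 0, 50, 50), "a red ball"): 0.91,
        ((60, 10, 120, 90), "a small dog"): 0.84,
    }
    return toy_scores[(box, phrase)]

def ground_phrases(image, boxes, threshold=0.5):
    """For each candidate box, generate a phrase with BLIP and keep the
    (box, phrase) pairs whose CLIP matching score clears the threshold."""
    results = []
    for box in boxes:
        phrase = blip_caption(image, box)
        score = clip_score(image, box, phrase)
        if score >= threshold:
            results.append((box, phrase, score))
    # Return highest-confidence detections first.
    return sorted(results, key=lambda r: -r[2])

image = "toy_image"
boxes = [(0, 0, 50, 50), (60, 10, 120, 90)]
detections = ground_phrases(image, boxes)
print(detections[0][1])  # phrase for the best-scoring region
```

In the real system the candidate boxes come from the weakly supervised localization network trained on COCO captions; here they are supplied by hand to keep the sketch self-contained.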

Tal Shaharabany, Yoad Tewel, Lior Wolf · 2022

Related benchmarks

Task                                    Dataset                Metric              Result   Rank
Referring Image Segmentation            RefCOCO+ (test-B)      mIoU                21.6     200
Referring Image Segmentation            RefCOCO (val)          mIoU                18.3     197
Referring Image Segmentation            RefCOCO (test-A)       mIoU                17.4     178
Referring Image Segmentation            RefCOCO (test-B)       mIoU                19.9     119
Referring Image Segmentation            RefCOCO+ (val)         mIoU                19.9     117
Referring Image Segmentation            RefCOCO+ (test-A)      mIoU                18.7     45
Weakly Supervised Object Localization   CUB-200-2011 (test)    Accuracy            96.54    38
Phrase Localization                     VisualGenome (VG) (test)  Pointing Accuracy  62.31  29
Phrase Grounding                        Flickr30K              --                  --       20
Phrase Grounding                        ReferIt (test)         Pointing Accuracy   65.95    18

Showing 10 of 20 rows.
