
Adapting CLIP For Phrase Localization Without Further Training

About

Supervised or weakly supervised methods for phrase localization (textual grounding) rely either on human annotations or on other supervised models, e.g., object detectors. Obtaining these annotations is labor-intensive and may be difficult to scale in practice. We propose to leverage recent advances in contrastive language-vision models, namely CLIP, pre-trained on image and caption pairs collected from the internet. In its original form, CLIP only outputs an image-level embedding without any spatial resolution. We adapt CLIP to generate high-resolution spatial feature maps. Importantly, we can extract feature maps from both ViT and ResNet CLIP models while maintaining the semantic properties of an image embedding. This provides a natural framework for phrase localization. Our method requires no human annotations or additional training. Extensive experiments show that our method outperforms existing no-training methods in zero-shot phrase localization, and in some cases it even outperforms supervised methods. Code is available at https://github.com/pals-ttic/adapting-CLIP.

Jiahao Li, Greg Shakhnarovich, Raymond A. Yeh • 2022
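
To make the idea concrete, here is a minimal sketch of turning a ResNet CLIP backbone into a spatial feature map and scoring it against a phrase, as the abstract describes. It is not the paper's exact adaptation: applying the attention pool's value/output projections at every location of the layer4 feature map is an assumed simplification, and the helper names (phrase_heatmap, _cache) are hypothetical.

```python
import torch
import torch.nn.functional as F
import clip                      # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

# Cache the last residual stage's spatial feature map with a forward hook,
# so we do not have to re-implement CLIP's ResNet stem.
_cache = {}
model.visual.layer4.register_forward_hook(lambda m, i, o: _cache.update(feat=o))

@torch.no_grad()
def phrase_heatmap(image_path, phrase):
    """Score every spatial location of the image against a phrase embedding."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    model.encode_image(image)                        # fills _cache["feat"]

    # Project each location of the layer4 map into the joint image-text space
    # using the attention pool's value/output projections (no global pooling).
    pool = model.visual.attnpool
    feat = _cache["feat"].permute(0, 2, 3, 1)        # (1, H, W, C)
    feat = pool.c_proj(pool.v_proj(feat)).float()    # (1, H, W, D)
    feat = F.normalize(feat[0], dim=-1)              # (H, W, D)

    text = model.encode_text(clip.tokenize([phrase]).to(device)).float()
    text = F.normalize(text, dim=-1)                 # (1, D)

    heat = feat @ text[0]                            # (H, W) cosine similarities
    # Upsample to the input resolution; the peak gives a coarse localization.
    return F.interpolate(heat[None, None], size=image.shape[-2:],
                         mode="bilinear", align_corners=False)[0, 0]
```

A ViT backbone could be handled analogously by using the per-patch token embeddings in place of the ResNet feature map, again keeping the features in the joint image-text embedding space so that phrase similarity remains meaningful.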

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Grounding | RefCOCO+ (val) | Accuracy | 17.5 | 171
Visual Grounding | RefCOCO+ (testB) | Accuracy | 19.6 | 169
Visual Grounding | RefCOCO+ (testA) | Accuracy | 18.9 | 168
Visual Grounding | RefCOCO (testB) | Accuracy | 18.0 | 125
Visual Grounding | RefCOCO (val) | Accuracy | 16.7 | 119
Visual Grounding | RefCOCO (testA) | Accuracy | 18.4 | 117
Semantic Segmentation | COCO Stuff-27 (val) | mIoU | 1.64e+3 | 75
Object Detection | COCO | mAP@0.3 | 0.149 | 5
Object Detection | Pascal VOC | mAP@0.3 | 28.7 | 5
