
What does CLIP know about a red circle? Visual prompt engineering for VLMs

About

Large-scale Vision-Language Models, such as CLIP, learn powerful image-text representations that have found numerous applications, from zero-shot classification to text-to-image generation. Despite this, their ability to solve novel discriminative tasks via prompting falls behind that of large language models, such as GPT-3. Here we explore the idea of visual prompt engineering for solving computer vision tasks beyond classification by editing in image space instead of text. In particular, we discover an emergent ability of CLIP, where, by simply drawing a red circle around an object, we can direct the model's attention to that region, while also maintaining global information. We show the power of this simple approach by achieving state-of-the-art results in zero-shot referring expression comprehension and strong performance in keypoint localization tasks. Finally, we draw attention to some potential ethical concerns of large vision-language models.
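The core idea is that the prompt is an edit of the image itself, so the model's input format is unchanged. A minimal sketch of the marking step, using a plain nested list of RGB tuples as a stand-in for a real image (with Pillow one would use `ImageDraw.ellipse` instead); the function name and signature are illustrative, not from the paper:

```python
import math

def draw_red_circle(pixels, box, thickness=2):
    """Draw a red circle around `box` = (x0, y0, x1, y1) in pixel space.

    `pixels` is a list of rows, each a list of (r, g, b) tuples. The circle
    is the ellipse inscribed in `box`; pixels within `thickness` of its
    boundary are painted pure red. Because the prompt lives in image space,
    the marked image can be fed to CLIP like any other image.
    """
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    rx, ry = (x1 - x0) / 2, (y1 - y0) / 2
    height, width = len(pixels), len(pixels[0])
    for y in range(height):
        for x in range(width):
            # normalised radial distance of (x, y) from the ellipse boundary
            d = math.hypot((x - cx) / rx, (y - cy) / ry)
            if abs(d - 1.0) * min(rx, ry) <= thickness:
                pixels[y][x] = (255, 0, 0)
    return pixels
```

For zero-shot referring expression comprehension, the usage pattern suggested by the abstract would be: mark each candidate box in a separate copy of the image, encode all marked copies with CLIP's image encoder, and pick the box whose marked image scores highest against the text query.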

Aleksandar Shtedritski, Christian Rupprecht, Andrea Vedaldi • 2023

Related benchmarks

Task                                 Dataset            Result          Rank
Visual Question Answering            VizWiz             Accuracy 69.2   1525
Multimodal Understanding             MMMU               Accuracy 51.6   437
Referring Expression Comprehension   RefCOCO+ (val)     Accuracy 43.9   354
Referring Expression Comprehension   RefCOCO (val)      Accuracy 38.0   344
Referring Expression Comprehension   RefCOCO (testA)    Accuracy 45.3   342
Multimodal Understanding             MMStar             --              324
Referring Expression Comprehension   RefCOCOg (test)    Accuracy 47.3   300
Referring Expression Comprehension   RefCOCOg (val)     Accuracy 47.2   300
Mathematical Reasoning               MathVista          Accuracy 69.1   257
Referring Expression Comprehension   RefCOCO+ (testB)   Accuracy 37.1   244

Showing 10 of 24 rows.
