Scaffolding Coordinates to Promote Vision-Language Coordination in Large Multi-Modal Models

About

State-of-the-art Large Multi-Modal Models (LMMs) have demonstrated exceptional capabilities in vision-language tasks. Despite these advanced capabilities, LMM performance remains limited in challenging scenarios that require complex reasoning over multiple levels of visual information. Existing prompting techniques for LMMs focus either on improving textual reasoning or on leveraging tools for image preprocessing, lacking a simple and general visual prompting scheme that promotes vision-language coordination in LMMs. In this work, we propose Scaffold prompting, which scaffolds coordinates to promote vision-language coordination. Specifically, Scaffold overlays a dot matrix on the image as visual information anchors and leverages multi-dimensional coordinates as textual positional references. Extensive experiments on a wide range of challenging vision-language tasks demonstrate the superiority of Scaffold over GPT-4V with textual CoT prompting. Our code is released at https://github.com/leixy20/Scaffold.
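The core idea (a uniform dot matrix as visual anchors, paired with textual coordinates) can be sketched in a few lines. The snippet below is a minimal illustration of the scheme as described in the abstract, not the authors' released implementation; the function name, grid size, and prompt wording are assumptions for illustration.

```python
# Minimal sketch of Scaffold-style coordinate scaffolding: place a uniform
# rows x cols dot matrix over an image and pair each dot with a textual
# (row, col) coordinate that can be referenced in the prompt.

def scaffold_coordinates(width, height, rows=6, cols=6):
    """Return a list of ((row, col), (x, y)) pairs: a 1-indexed coordinate
    label and the pixel center of each dot in the matrix."""
    anchors = []
    for r in range(rows):
        for c in range(cols):
            # Center each dot within its grid cell.
            x = int((c + 0.5) * width / cols)
            y = int((r + 0.5) * height / rows)
            anchors.append(((r + 1, c + 1), (x, y)))
    return anchors

# Textual positional references that would accompany the overlaid image.
anchors = scaffold_coordinates(600, 400)
prompt_lines = [f"Dot {r},{c} is at pixel ({x}, {y})."
                for (r, c), (x, y) in anchors]
print(prompt_lines[0])  # → Dot 1,1 is at pixel (50, 33).
```

In practice the dots would also be drawn onto the image (e.g. with an imaging library) before it is passed to the LMM, so that the visual anchors and the textual coordinates refer to the same positions.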

Xuanyu Lei, Zonghan Yang, Xinrui Chen, Peng Li, Yang Liu · 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Science Question Answering | ScienceQA | Accuracy | 75 | 502
Visual Question Answering | ScienceQA | Accuracy | 76.3 | 370
Multimodal Model Evaluation | MME | Score | 1820 | 98
Multimodal Reasoning | M^3CoT | Accuracy | 53.6 | 70
Visual Question Answering | LLaVA-W | ROUGE-L | 41.5 | 56
Visual Question Answering | M3CoT | Accuracy | 56.7 | 56
Multimodal Reasoning | M3CoT (test) | Total Acc | 44.9 | 47
Allocentric Spatial Reasoning | COMFORT# | Left/Right Accuracy | 52.17 | 19
Allocentric Spatial Reasoning | 3DSRBench | Left/Right Acc | 34.81 | 19
Multimodal Reasoning | GQA (test) | Accuracy | 48.7 | 10
