
Scaffolding Coordinates to Promote Vision-Language Coordination in Large Multi-Modal Models

About

State-of-the-art Large Multi-Modal Models (LMMs) have demonstrated exceptional capabilities in vision-language tasks. Despite these advanced functionalities, the performance of LMMs is still limited in challenging scenarios that require complex reasoning over multiple levels of visual information. Existing prompting techniques for LMMs focus on either improving textual reasoning or leveraging tools for image preprocessing, and lack a simple and general visual prompting scheme to promote vision-language coordination in LMMs. In this work, we propose Scaffold prompting, which scaffolds coordinates to promote vision-language coordination. Specifically, Scaffold overlays a dot matrix on the image as visual information anchors and leverages multi-dimensional coordinates as textual positional references. Extensive experiments on a wide range of challenging vision-language tasks demonstrate the superiority of Scaffold over GPT-4V with textual CoT prompting. Our code is released at https://github.com/leixy20/Scaffold.
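The core idea of the overlay step can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's reference implementation: the grid size, dot style, and `(row, col)` label format are assumptions chosen for clarity, and the coordinate string it returns stands in for the textual positional references fed to the LMM.

```python
from PIL import Image, ImageDraw

def scaffold_overlay(image, rows=6, cols=6, radius=3):
    """Overlay a dot matrix with (row, col) coordinate labels on an image.

    A rough sketch of the Scaffold idea: evenly spaced dots act as visual
    information anchors, and their coordinates double as textual positional
    references in the prompt. Grid density and styling here are illustrative,
    not the settings used in the paper.
    """
    img = image.convert("RGB").copy()
    draw = ImageDraw.Draw(img)
    w, h = img.size
    coords = []
    for i in range(1, rows + 1):
        for j in range(1, cols + 1):
            # Place dots at evenly spaced interior grid points.
            x = w * j / (cols + 1)
            y = h * i / (rows + 1)
            draw.ellipse(
                [x - radius, y - radius, x + radius, y + radius],
                fill="black",
            )
            # Label each dot with its (row, col) coordinate.
            draw.text((x + radius + 1, y - radius), f"({i},{j})", fill="black")
            coords.append((i, j))
    return img, coords
```

The annotated image is then sent to the LMM together with a prompt that explains the coordinate convention, so the model can ground its reasoning ("the object near (3,5)...") in shared positional references.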

Xuanyu Lei, Zonghan Yang, Xinrui Chen, Peng Li, Yang Liu • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Allocentric Spatial Reasoning | COMFORT# | Left/Right Accuracy: 52.17 | 19 |
| Allocentric Spatial Reasoning | 3DSRBench | Left/Right Accuracy: 34.81 | 19 |
