
V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs

About

When we look around and perform complex tasks, how we see and selectively process what we see is crucial. However, the lack of this visual search mechanism in current multimodal LLMs (MLLMs) hinders their ability to focus on important visual details, especially when handling high-resolution and visually crowded images. To address this, we introduce V*, an LLM-guided visual search mechanism that employs the world knowledge in LLMs for efficient visual querying. When combined with an MLLM, this mechanism enhances collaborative reasoning, contextual understanding, and precise targeting of specific visual elements. This integration results in a new MLLM meta-architecture, named Show, sEArch, and TelL (SEAL). We further create V*Bench, a benchmark specifically designed to evaluate MLLMs in their ability to process high-resolution images and focus on visual details. Our study highlights the necessity of incorporating visual search capabilities into multimodal systems. The code is available at https://github.com/penghao-wu/vstar.

Penghao Wu, Saining Xie • 2023
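
The abstract describes V* as an iterative loop: the MLLM identifies visual targets it needs but cannot resolve, uses LLM world knowledge to guess where each target is likely to be, searches that sub-region, and stores hits in a visual working memory before answering. The sketch below is one possible reading of that loop, not the released implementation; the `mllm` and `detector` interfaces (`missing_targets`, `propose_region`, `answer`, `find`) are hypothetical stand-ins for the models wired up in the actual repo.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1) in image coordinates


@dataclass
class Region:
    box: Box
    label: str


@dataclass
class VisualWorkingMemory:
    # Holds targets located so far; the MLLM conditions on this when answering.
    regions: List[Region] = field(default_factory=list)


def vstar_answer(image, question, mllm, detector, max_rounds: int = 4) -> str:
    """Sketch of an LLM-guided visual search loop in the spirit of V*.

    Assumed (hypothetical) interfaces:
      mllm.missing_targets(image, question, memory) -> List[str]
      mllm.propose_region(image, target)            -> Box
      mllm.answer(image, question, memory)          -> str
      detector.find(crop, target)                   -> List[Box]
    """
    memory = VisualWorkingMemory()
    for _ in range(max_rounds):
        missing = mllm.missing_targets(image, question, memory)
        if not missing:
            break  # every needed detail is already in working memory
        for target in missing:
            box = mllm.propose_region(image, target)  # world-knowledge prior
            crop = image.crop(box)                    # zoom into the region
            for hit in detector.find(crop, target):
                memory.regions.append(Region(hit, target))
    return mllm.answer(image, question, memory)
```

The key design point this sketch tries to capture is that search is driven top-down by the language model's prior over likely locations, rather than by exhaustively scanning the full high-resolution image.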

Related benchmarks

Task | Dataset | Metric | Result | Rank
Object Hallucination Evaluation | POPE | Accuracy | 82.4 | 1455
Multimodal Evaluation | MME | Score | 1130 | 658
Multimodal Understanding | MMBench | Accuracy | 33.1 | 637
Visual Question Answering | GQA | Accuracy | 59.8 | 505
Multimodal Reasoning | MM-Vet | MM-Vet Score | 27.7 | 431
Multimodal Capability Evaluation | MM-Vet | Score | 27.7 | 345
Document Visual Question Answering | DocVQA | ANLS | 5.31 | 263
Multimodal Understanding | MME | MME Score | 1130 | 207
Multimodal Evaluation | SEED-Bench | Accuracy | 41.7 | 95
Visual Reasoning | GQA | Accuracy | 50.18 | 93
Showing 10 of 35 rows
