
V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs

About

When we look around and perform complex tasks, how we see and selectively process what we see is crucial. However, the lack of this visual search mechanism in current multimodal LLMs (MLLMs) hinders their ability to focus on important visual details, especially when handling high-resolution and visually crowded images. To address this, we introduce V*, an LLM-guided visual search mechanism that employs the world knowledge in LLMs for efficient visual querying. When combined with an MLLM, this mechanism enhances collaborative reasoning, contextual understanding, and precise targeting of specific visual elements. This integration results in a new MLLM meta-architecture, named Show, sEArch, and TelL (SEAL). We further create V*Bench, a benchmark specifically designed to evaluate MLLMs in their ability to process high-resolution images and focus on visual details. Our study highlights the necessity of incorporating visual search capabilities into multimodal systems. The code is available at https://github.com/penghao-wu/vstar.

Penghao Wu, Saining Xie • 2023
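To make the meta-architecture concrete, here is a minimal Python sketch of how the show, search, and tell loop could be organized. This is a sketch under assumptions, not the repository's implementation: `seal_answer`, `Crop`, and the `mllm`/`searcher` methods are hypothetical names introduced for illustration.

```python
# Minimal sketch of the Show, sEArch, and TelL (SEAL) loop described above.
# Every name here (seal_answer, mllm.try_answer, searcher.locate,
# mllm.answer_with_memory) is an illustrative placeholder, NOT the actual
# API of https://github.com/penghao-wu/vstar.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Crop:
    """A localized image region returned by the visual search module."""
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) in image coordinates
    label: str                      # name of the target object in this region

def seal_answer(mllm, searcher, image, question) -> str:
    # "Show": the MLLM first attempts the question on the full image and
    # lists any visual targets it needs but cannot resolve at this scale.
    answer, missing_targets = mllm.try_answer(image, question)
    if not missing_targets:
        return answer

    # "sEArch": for each missing target, the LLM-guided visual search uses
    # world knowledge to pick promising sub-regions, zooming in until the
    # target is localized (or the search fails).
    found: List[Crop] = []
    for target in missing_targets:
        crop: Optional[Crop] = searcher.locate(image, target)
        if crop is not None:
            found.append(crop)

    # "TelL": the localized crops are placed in a visual working memory,
    # and the MLLM answers again conditioned on those details.
    return mllm.answer_with_memory(image, question, memory=found)
```

The design point the abstract emphasizes is that the search step is guided by the LLM's world knowledge rather than exhaustive scanning, which is what keeps visual querying efficient on high-resolution, visually crowded images.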

Related benchmarks

Task                                   | Dataset          | Result                 | Rank
---------------------------------------|------------------|------------------------|-----
Object Hallucination Evaluation        | POPE             | Accuracy 82.4          | 935
Multimodal Evaluation                  | MME              | Score 1130             | 557
Multimodal Understanding               | MMBench          | Accuracy 33.1          | 367
Multimodal Capability Evaluation       | MM-Vet           | Score 27.7             | 282
Multimodal Reasoning                   | MM-Vet           | MM-Vet Score 27.7      | 281
Multimodal Understanding               | MME              | MME Score 1130         | 158
Multimodal Evaluation                  | SEED-Bench       | Accuracy 41.7          | 80
Multimodal Conversation                | LLaVA-Bench Wild | Score 59.1             | 52
Fine-grained Visual Question Answering | V*Bench          | Overall Accuracy 73.68 | 28
Fine-grained Visual Question Answering | HRBench-8K       | Overall Accuracy 33.5  | 28

(Showing 10 of 21 rows.)
