
Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge

About

In recent years, multimodal large language models (MLLMs) have made significant strides by training on vast high-quality image-text datasets, enabling them to generally understand images well. However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs, limiting their ability to answer questions requiring an understanding of detailed or localized visual elements. Drawing inspiration from the Retrieval-Augmented Generation (RAG) concept, this paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models (e.g., instance segmentation/OCR models), into MLLMs. This is a promising yet underexplored direction for enhancing MLLMs' performance. Our approach diverges from concurrent works, which transform external knowledge into additional text prompts, necessitating the model to indirectly learn the correspondence between visual content and text coordinates. Instead, we propose embedding fine-grained knowledge information directly into a spatial embedding map as a visual prompt. This design can be effortlessly incorporated into various MLLMs, such as LLaVA and Mipha, considerably improving their visual understanding performance. Through rigorous experiments, we demonstrate that our method can enhance MLLM performance across nine benchmarks, amplifying their fine-grained context-aware capabilities.
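The core idea, painting the outputs of external vision models into a spatial embedding map aligned with the vision encoder's patch grid, can be sketched roughly as follows. This is an illustrative assumption of how such a visual prompt could be constructed (the function name, shapes, and pooling scheme are hypothetical, not the paper's actual implementation):

```python
import numpy as np

def build_spatial_prompt(masks, mask_embeds, grid_h, grid_w):
    """Fuse per-instance masks and their embeddings into a single
    spatial embedding map, then pool it to the MLLM's patch grid.

    masks:       (N, H, W) binary instance masks, e.g. from a
                 segmentation or OCR model (hypothetical inputs).
    mask_embeds: (N, C) one embedding vector per instance.
    Returns:     (grid_h, grid_w, C) map that could be added to
                 the MLLM's patch features as a visual prompt.
    """
    n, H, W = masks.shape
    c = mask_embeds.shape[1]
    # Paint each instance's embedding onto the pixels it covers;
    # later instances overwrite earlier ones where masks overlap.
    pixel_map = np.zeros((H, W, c), dtype=np.float32)
    for m, e in zip(masks, mask_embeds):
        pixel_map[m.astype(bool)] = e
    # Average-pool the pixel-level map down to the patch grid.
    ph, pw = H // grid_h, W // grid_w
    pooled = pixel_map[: grid_h * ph, : grid_w * pw]
    pooled = pooled.reshape(grid_h, ph, grid_w, pw, c).mean(axis=(1, 3))
    return pooled

# Toy usage: two instances in a 32x32 image, 4-dim embeddings, 8x8 grid.
masks = np.zeros((2, 32, 32))
masks[0, :16, :16] = 1          # instance 0 covers the top-left quadrant
masks[1, 16:, 16:] = 1          # instance 1 covers the bottom-right quadrant
embeds = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=np.float32)
prompt = build_spatial_prompt(masks, embeds, 8, 8)
print(prompt.shape)             # (8, 8, 4)
```

Because the map lives in the same spatial layout as the patch features, the model receives the fine-grained knowledge positionally, rather than having to learn a correspondence between image content and textual coordinates.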

Yuanze Lin, Yunsheng Li, Dongdong Chen, Weijian Xu, Ronald Clark, Philip Torr, Lu Yuan • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | TextVQA | Accuracy 59.8 | 1117 |
| Visual Question Answering | GQA | Accuracy 63.3 | 963 |
| Multimodal Understanding | MM-Vet | MM-Vet Score 34.9 | 418 |
| Multimodal Understanding | MMBench | -- | 367 |
| Visual Question Answering | VQAv2 | Accuracy 79.8 | 177 |
| Hallucination Evaluation | POPE | Accuracy 88.9 | 132 |
| Science Question Answering | SciQA-IMG | SciQA-IMG Accuracy 69.5 | 53 |
| Text-based Visual Question Answering | TextVQA | Accuracy 59.8 | 23 |
| Science Question Answering | ScienceQA IMG | Accuracy 71.8 | 21 |
| Multimodal Benchmarking | MM-Bench | Accuracy 71.5 | 19 |

Showing 10 of 16 rows.
