
Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs

About

Is vision good enough for language? Recent advances in multimodal models primarily stem from the powerful reasoning abilities of large language models (LLMs). However, the visual component typically depends only on instance-level contrastive language-image pre-training (CLIP). Our research reveals that the visual capabilities of recent multimodal LLMs (MLLMs) still exhibit systematic shortcomings. To understand the roots of these errors, we explore the gap between the visual embedding space of CLIP and that of vision-only self-supervised learning. We identify "CLIP-blind pairs" - images that CLIP perceives as similar despite their clear visual differences. With these pairs, we construct the Multimodal Visual Patterns (MMVP) benchmark. MMVP exposes areas where state-of-the-art systems, including GPT-4V, struggle with straightforward questions across nine basic visual patterns, often providing incorrect answers and hallucinated explanations. We further evaluate various CLIP-based vision-and-language models and find a notable correlation between the visual patterns that challenge CLIP models and those that are problematic for multimodal LLMs. As an initial effort to address these issues, we propose a Mixture of Features (MoF) approach, demonstrating that integrating vision self-supervised learning features into MLLMs can significantly enhance their visual grounding capabilities. Together, our results suggest that visual representation learning remains an open challenge, and that accurate visual grounding is crucial for future successful multimodal systems.
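The "CLIP-blind pair" idea can be illustrated with a minimal sketch: given image embeddings from a CLIP encoder and from a vision-only self-supervised model (e.g. DINOv2), flag pairs whose CLIP similarity is high while their SSL similarity is low. The function name, the similarity thresholds, and the use of plain NumPy vectors as stand-ins for real model embeddings are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_clip_blind_pairs(clip_embs, dino_embs, clip_thresh=0.95, dino_thresh=0.6):
    """Return index pairs (i, j) that CLIP sees as near-identical but a
    vision-only SSL model sees as clearly different.

    clip_embs, dino_embs: parallel lists of per-image embedding vectors.
    The thresholds are hypothetical; the actual cutoffs would be tuned.
    """
    pairs = []
    n = len(clip_embs)
    for i in range(n):
        for j in range(i + 1, n):
            if (cosine_sim(clip_embs[i], clip_embs[j]) > clip_thresh
                    and cosine_sim(dino_embs[i], dino_embs[j]) < dino_thresh):
                pairs.append((i, j))
    return pairs
```

In practice the embeddings would come from pretrained CLIP and DINOv2 image encoders run over a large image corpus; the surviving pairs are then used to write the MMVP benchmark questions.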

Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, Saining Xie • 2024
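The Mixture of Features (MoF) approach mentioned in the abstract combines CLIP features with vision-only SSL features before they reach the language model. A minimal sketch of one variant, interleaving the two token streams along the sequence axis, is below; the function name, the assumption that both streams are already projected to a shared dimension, and the fixed interleaving order are all simplifications for illustration.

```python
import numpy as np

def interleaved_mof(clip_tokens, ssl_tokens):
    """Interleave CLIP and SSL vision tokens along the sequence axis.

    clip_tokens, ssl_tokens: (num_tokens, dim) arrays, assumed already
    projected to the same hidden dimension (a simplifying assumption).
    Returns a (2 * num_tokens, dim) array alternating CLIP and SSL tokens.
    """
    assert clip_tokens.shape == ssl_tokens.shape
    n, d = clip_tokens.shape
    out = np.empty((2 * n, d), dtype=clip_tokens.dtype)
    out[0::2] = clip_tokens  # even positions: CLIP tokens
    out[1::2] = ssl_tokens   # odd positions: SSL (e.g. DINOv2) tokens
    return out
```

A simpler alternative in the same spirit is an additive mixture, e.g. `alpha * clip_tokens + (1 - alpha) * ssl_tokens`; the interleaved variant instead preserves both token streams at the cost of a longer visual sequence for the LLM.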

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | Accuracy: 86.7 | 1455 |
| Visual Question Answering | VQA v2 | Accuracy: 79.3 | 1362 |
| Visual Question Answering | TextVQA | Accuracy: 58.7 | 1285 |
| Multimodal Understanding | MM-Vet | MM-Vet Score: 34.6 | 531 |
| Multimodal Understanding | MMBench (MMB) | Accuracy: 65.4 | 141 |
| Multimodal Conversation | LLaVA-Bench Wild | Score: 73.3 | 65 |
| Multimodal Visual Pattern Understanding | MMVP | Accuracy: 31.3 | 25 |
| Multimodal Large Language Model Evaluation | MLLM Evaluation Suite | Average Score (All): 51.4 | 22 |
| Image Classification | ImageNet-1K | -- | 10 |
| Visual Question Answering | MMVP-VLM | -- | 10 |

Showing 10 of 11 rows
