
Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs

About

We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach. While stronger language models can enhance multimodal capabilities, the design choices for vision components are often insufficiently explored and disconnected from visual representation learning research. This gap hinders accurate sensory grounding in real-world scenarios. Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations, offering new insights into different models and architectures -- self-supervised, strongly supervised, or combinations thereof -- based on experiments with over 20 vision encoders. We critically examine existing MLLM benchmarks, address the difficulties involved in consolidating and interpreting results from various tasks, and introduce a new vision-centric benchmark, CV-Bench. To further improve visual grounding, we propose the Spatial Vision Aggregator (SVA), a dynamic and spatially-aware connector that integrates high-resolution vision features with LLMs while reducing the number of tokens. Additionally, we discuss the curation of high-quality visual instruction-tuning data from publicly available sources, emphasizing the importance of data source balancing and distribution ratio. Collectively, Cambrian-1 not only achieves state-of-the-art performance but also serves as a comprehensive, open cookbook for instruction-tuned MLLMs. We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes. We hope our release will inspire and accelerate advancements in multimodal systems and visual representation learning.
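The Spatial Vision Aggregator described above can be sketched as a set of learnable query tokens that cross-attend to feature maps from several vision encoders, compressing many patch features into a fixed, small token budget. The sketch below is a minimal, illustrative toy, not Cambrian-1's actual implementation: the dimensions, random projection weights, and the `spatial_aggregate` helper are assumptions, and the paper's spatially-restricted (windowed) attention is simplified here to global cross-attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_aggregate(encoder_feats, num_queries=64, d_model=32, seed=0):
    """Toy SVA-style connector: num_queries learnable tokens cross-attend
    to each encoder's patch features, so the LLM sees a fixed token count
    regardless of encoder resolution. Weights here are random stand-ins
    for learned parameters (illustrative only)."""
    rng = np.random.default_rng(seed)
    queries = rng.normal(size=(num_queries, d_model))  # stand-in for learned queries
    aggregated = np.zeros((num_queries, d_model))
    for feat in encoder_feats:  # each feat: (H*W patches, d_enc), d_enc may differ
        d_enc = feat.shape[1]
        W_k = rng.normal(size=(d_enc, d_model)) / np.sqrt(d_enc)
        W_v = rng.normal(size=(d_enc, d_model)) / np.sqrt(d_enc)
        k, v = feat @ W_k, feat @ W_v
        attn = softmax(queries @ k.T / np.sqrt(d_model))  # (num_queries, H*W)
        # sum per-encoder outputs so the token count stays fixed
        aggregated += attn @ v
    return aggregated  # (num_queries, d_model)

# two hypothetical encoders at different patch-grid resolutions
feats = [np.random.default_rng(1).normal(size=(24 * 24, 48)),   # 576 patches
         np.random.default_rng(2).normal(size=(16 * 16, 96))]   # 256 patches
out = spatial_aggregate(feats)
print(out.shape)  # (64, 32): 832 patch features reduced to 64 tokens
```

The point of the design is visible in the shapes: 576 + 256 high-resolution patch features enter, but only 64 tokens reach the language model, which keeps the LLM's context cost bounded as encoder count and resolution grow.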

Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, Ziteng Wang, Rob Fergus, Yann LeCun, Saining Xie • 2024

Related benchmarks

Task                                  | Dataset             | Result             | Rank
--------------------------------------|---------------------|--------------------|-----
Visual Question Answering             | TextVQA             | Accuracy: 71.7     | 1117
Visual Question Answering             | GQA                 | Accuracy: 64.6     | 963
Multimodal Evaluation                 | MME                 | --                 | 557
Text-based Visual Question Answering  | TextVQA             | Accuracy: 76.7     | 496
Multimodal Understanding              | MM-Vet              | MM-Vet Score: 53.2 | 418
Visual Question Answering             | VQA 2.0 (test-dev)  | Accuracy: 83.8     | 337
Mathematical Reasoning                | MathVista           | Score: 50.3        | 322
Visual Question Answering             | TextVQA (val)       | VQA Score: 76.7    | 309
OCR Evaluation                        | OCRBench            | Score: 624         | 296
Multimodal Reasoning                  | MM-Vet              | MM-Vet Score: 51.7 | 281

Showing 10 of 108 rows
