
VL-Mamba: Exploring State Space Models for Multimodal Learning

About

Multimodal large language models (MLLMs) have attracted widespread interest and have rich applications. However, the attention mechanism inherent in their Transformer architecture has quadratic complexity in sequence length and incurs expensive computational overhead. In this work, we therefore propose VL-Mamba, a multimodal large language model based on state space models, which have been shown to hold great potential for long-sequence modeling with fast inference and linear scaling in sequence length. Specifically, we first replace the Transformer-based backbone language model, such as LLaMA or Vicuna, with the pre-trained Mamba language model. Then, we empirically explore how to effectively apply the 2D vision selective scan mechanism to multimodal learning, as well as combinations of different vision encoders and variants of pre-trained Mamba language models. Extensive experiments on diverse multimodal benchmarks show the competitive performance of the proposed VL-Mamba and demonstrate the great potential of applying state space models to multimodal learning tasks.
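To make the linear-time claim concrete, the sketch below shows the discretized selective-scan (S6) recurrence that underlies Mamba: the state update runs once per token, so cost grows linearly with sequence length, unlike attention's quadratic pairwise scores. This is an illustrative NumPy sketch under simplifying assumptions (sequential loop, raw `A` rather than a learned log-space parameterization, no hardware-aware parallel scan), not the paper's implementation.

```python
import numpy as np

def selective_scan(x, A, B, C, delta):
    """Minimal sequential selective-scan (S6) recurrence.

    x:     (L, D)  input sequence of length L with D channels
    A:     (D, N)  state-transition parameters (N = state size)
    B, C:  (L, N)  input-dependent projections (the "selective" part)
    delta: (L, D)  input-dependent step sizes
    Returns y: (L, D) output sequence.
    """
    L, D = x.shape
    N = A.shape[1]
    h = np.zeros((D, N))          # hidden state, carried across tokens
    y = np.empty((L, D))
    for t in range(L):            # one state update per token -> O(L) overall
        # Zero-order-hold discretization of the continuous SSM
        Abar = np.exp(delta[t][:, None] * A)       # (D, N)
        Bbar = delta[t][:, None] * B[t][None, :]   # (D, N)
        h = Abar * h + Bbar * x[t][:, None]        # recurrent state update
        y[t] = (h * C[t][None, :]).sum(axis=1)     # readout through C
    return y
```

The 2D vision selective scan explored in the paper extends this 1D recurrence to image patches by flattening the patch grid along several scan orders (e.g. row-wise and column-wise, forward and backward) and merging the results.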

Yanyuan Qiao, Zheng Yu, Longteng Guo, Sihan Chen, Zijia Zhao, Mingzhen Sun, Qi Wu, Jing Liu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VQA v2 | Accuracy | 76.6 | 1165 |
| Visual Question Answering | TextVQA | Accuracy | 48.9 | 1117 |
| Visual Question Answering | GQA | Accuracy | 56.2 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy | 84.4 | 935 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 76.6 | 664 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 32.6 | 418 |
| Visual Question Answering | TextVQA (val) | VQA Score | 48.9 | 309 |
| Multimodal Capability Evaluation | MM-Vet | Score | 32.6 | 282 |
| Visual Question Answering | GQA (test-dev) | Accuracy | 56.2 | 178 |
| Science Question Answering | ScienceQA (SQA-IMG) | Accuracy | 65.4 | 114 |

(10 of 15 rows shown)
