
Efficient Multimodal Learning from Data-centric Perspective

About

Multimodal Large Language Models (MLLMs) have demonstrated notable capabilities in general visual understanding and reasoning tasks. However, their deployment is hindered by substantial computational costs in both training and inference, limiting accessibility to the broader research and user communities. A straightforward solution is to leverage smaller pre-trained vision and language models, which inevitably causes significant performance drops. In this paper, we demonstrate the possibility of training a smaller but better MLLM with high-quality training data. Specifically, we introduce Bunny, a family of lightweight MLLMs with flexible vision and language backbones for efficient multimodal learning from selected training data. Experiments show that our Bunny-4B/8B outperforms state-of-the-art large MLLMs on multiple benchmarks. We expect that this work can provide the community with a clean and flexible open-source tool for further research and development. The code, models, and data can be found at https://github.com/BAAI-DCAI/Bunny.

Muyang He, Yexin Liu, Boya Wu, Jianhao Yuan, Yueze Wang, Tiejun Huang, Bo Zhao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | Accuracy | 87.2 | 1455 |
| Visual Question Answering | VQA v2 | Accuracy | 81.5 | 1362 |
| Visual Question Answering | GQA | Accuracy | 63.5 | 1249 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 82.9 | 706 |
| Multimodal Evaluation | MME | -- | -- | 658 |
| Multimodal Understanding | MMBench | Accuracy | 72.9 | 637 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 39.1 | 531 |
| Science Question Answering | ScienceQA | -- | -- | 502 |
| Mathematical Reasoning | MathVista | Score | 31.5 | 385 |
| Visual Question Answering | ChartQA | Accuracy | 30.1 | 371 |

Showing 10 of 76 rows.
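For quick programmatic comparison, the rows above can be held in a small structure. A minimal sketch (scores and ranks copied from the table; the helper name is illustrative, and rows without a reported score are omitted):

```python
# Benchmark rows from the table: (dataset, metric, score, leaderboard rank).
# MME and ScienceQA report no score here, so they are left out.
RESULTS = [
    ("POPE", "Accuracy", 87.2, 1455),
    ("VQA v2", "Accuracy", 81.5, 1362),
    ("GQA", "Accuracy", 63.5, 1249),
    ("VQA v2 (test-dev)", "Overall Accuracy", 82.9, 706),
    ("MMBench", "Accuracy", 72.9, 637),
    ("MM-Vet", "MM-Vet Score", 39.1, 531),
    ("MathVista", "Score", 31.5, 385),
    ("ChartQA", "Accuracy", 30.1, 371),
]

def best_by_score(rows):
    """Return the (dataset, score) pair with the highest raw score.

    Note: raw scores across different metrics are not directly
    comparable; this is only a convenience for browsing the table.
    """
    name, _, score, _ = max(rows, key=lambda r: r[2])
    return name, score

print(best_by_score(RESULTS))
```

Keeping the metric name alongside each score avoids accidentally comparing, say, an accuracy against an MM-Vet score as if they were on the same scale.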

Other info

Code: https://github.com/BAAI-DCAI/Bunny
