
Efficient Multimodal Learning from Data-centric Perspective

About

Multimodal Large Language Models (MLLMs) have demonstrated notable capabilities in general visual understanding and reasoning tasks. However, their deployment is hindered by substantial computational costs in both training and inference, limiting accessibility to the broader research and user communities. A straightforward remedy is to use smaller pre-trained vision and language models, but this inevitably causes a significant performance drop. In this paper, we demonstrate the possibility of training a smaller but better MLLM on high-quality training data. Specifically, we introduce Bunny, a family of lightweight MLLMs with flexible vision and language backbones for efficient multimodal learning from selected training data. Experiments show that our Bunny-4B/8B outperforms state-of-the-art large MLLMs on multiple benchmarks. We expect this work to provide the community with a clean and flexible open-source tool for further research and development. The code, models, and data are available at https://github.com/BAAI-DCAI/Bunny.
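For readers who want to try a Bunny checkpoint, the sketch below shows one plausible inference path via Hugging Face Transformers with `trust_remote_code=True`, following the common LLaVA-style multimodal interface. The model ID, the `-200` image-placeholder token id, and the `process_images` helper are assumptions based on that convention rather than a verbatim copy of the repository's README, so consult https://github.com/BAAI-DCAI/Bunny for the exact identifiers.

```python
# Minimal inference sketch for a Bunny checkpoint (assumed interface).
# The model ID, the -200 image-placeholder id, and model.process_images
# are assumptions based on LLaVA-style remote code; check the repo README.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "BAAI/Bunny-Llama-3-8B-V"  # hypothetical ID; see the repo for released checkpoints

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # loads the custom multimodal code shipped with the checkpoint
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

# Build a prompt with an <image> placeholder, then splice the special
# image token id (-200 in LLaVA-style implementations) between the chunks.
prompt = "Describe this image."
text = f"USER: <image>\n{prompt} ASSISTANT:"
chunks = [tokenizer(c).input_ids for c in text.split("<image>")]
input_ids = torch.tensor(
    chunks[0] + [-200] + chunks[1][1:], dtype=torch.long
).unsqueeze(0).to(model.device)

# Preprocess the image with the helper shipped in the remote code (assumed name).
image = Image.open("example.jpg")
image_tensor = model.process_images([image], model.config).to(
    dtype=model.dtype, device=model.device
)

output_ids = model.generate(
    input_ids, images=image_tensor, max_new_tokens=100, use_cache=True
)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True))
```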

Muyang He, Yexin Liu, Boya Wu, Jianhao Yuan, Yueze Wang, Tiejun Huang, Bo Zhao • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | VQA v2 | Accuracy: 81.5 | 1165 |
| Visual Question Answering | GQA | Accuracy: 63.5 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy: 87.2 | 935 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy: 82.9 | 664 |
| Multimodal Evaluation | MME | -- | 557 |
| Mathematical Reasoning | MathVista | Score: 31.5 | 322 |
| OCR Evaluation | OCRBench | Score: 444 | 296 |
| Multi-discipline Multimodal Understanding | MMMU | Accuracy: 43.3 | 266 |
| Science Question Answering | ScienceQA IMG | Accuracy: 70.9 | 256 |
| Science Question Answering | ScienceQA | -- | 229 |
Showing 10 of 59 benchmark results.

Other info

Code: https://github.com/BAAI-DCAI/Bunny
