TinyLLaVA: A Framework of Small-scale Large Multimodal Models

About

We present the TinyLLaVA framework, which provides a unified perspective on designing and analyzing small-scale Large Multimodal Models (LMMs). We empirically study the effects of different vision encoders, connection modules, language models, training data, and training recipes. Our extensive experiments show that with better-quality data and better training recipes, smaller LMMs can consistently achieve performance on par with larger LMMs. Under our framework, we train a family of small-scale LMMs. Our best model, TinyLLaVA-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL. We hope our findings can serve as baselines for future research in terms of data scaling, training setups, and model selection. Our model weights and code will be made public.
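The framework's three building blocks (vision encoder, connection module, small language model) compose in the usual LLaVA-style pipeline: patch embeddings from the vision encoder are projected by a connector into the language model's embedding space and prepended to the text tokens. Below is a minimal NumPy sketch of that data flow; all dimensions, the ReLU activation, and the random weights are illustrative placeholders, not the paper's actual components or values.

```python
import numpy as np

# Illustrative sizes only (not the paper's actual dimensions)
VISION_DIM = 32   # width of vision-encoder patch features
LM_DIM = 48       # language-model embedding width
N_PATCHES = 9     # number of image patches
N_TOKENS = 5      # number of text tokens

rng = np.random.default_rng(0)

def vision_encoder(image):
    """Stub for a vision encoder (e.g. a ViT): image -> patch embeddings."""
    return rng.standard_normal((N_PATCHES, VISION_DIM))

def connector(patch_embeds, w1, w2):
    """Two-layer MLP connector mapping vision features into the LM embedding space."""
    h = np.maximum(patch_embeds @ w1, 0.0)  # ReLU here for brevity
    return h @ w2

# Random connector weights; in a real recipe these are what get trained
w1 = rng.standard_normal((VISION_DIM, LM_DIM))
w2 = rng.standard_normal((LM_DIM, LM_DIM))

image_tokens = connector(vision_encoder(None), w1, w2)
text_tokens = rng.standard_normal((N_TOKENS, LM_DIM))

# The small LM then attends over the concatenated multimodal sequence
lm_input = np.concatenate([image_tokens, text_tokens], axis=0)
print(lm_input.shape)
```

Training recipes in this setting typically differ in which of the three blocks are frozen versus fine-tuned at each stage, which is one of the axes the experiments above vary.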

Baichuan Zhou, Ying Hu, Xi Weng, Junlong Jia, Jie Luo, Xien Liu, Ji Wu, Lei Huang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VizWiz | Accuracy | 28.7 | 1525 |
| Object Hallucination Evaluation | POPE | Accuracy | 88.5 | 1455 |
| Visual Question Answering | VQA v2 | Accuracy | 79.9 | 1362 |
| Visual Question Answering | TextVQA | Accuracy | 59.1 | 1285 |
| Visual Question Answering | GQA | Accuracy | 62.0 | 1249 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 59.7 | 807 |
| Multimodal Evaluation | MME | Score | 1460 | 658 |
| Multimodal Understanding | MMBench | Accuracy | 66.9 | 637 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 32.0 | 531 |
| Visual Question Answering | GQA | Accuracy | 65.6 | 505 |

(Showing 10 of 62 rows)
