TinyLLaVA: A Framework of Small-scale Large Multimodal Models
About
We present TinyLLaVA, a framework that provides a unified perspective for designing and analyzing small-scale Large Multimodal Models (LMMs). We empirically study the effects of different vision encoders, connection modules, language models, training data, and training recipes. Our extensive experiments show that, given better-quality data and better training recipes, smaller LMMs can consistently achieve performance on par with larger LMMs. Under our framework, we train a family of small-scale LMMs. Our best model, TinyLLaVA-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL. We hope our findings can serve as baselines for future research on data scaling, training setups, and model selection. Our model weights and code will be made public.
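To make the design space concrete, the sketch below shows the common LLaVA-style composition the framework studies: a vision encoder produces patch features, a connection module projects them into the language model's embedding space, and the language model decodes over the combined visual and text tokens. This is a minimal illustrative sketch, not the authors' implementation; the class name, the two-layer MLP connector, and all dimensions are assumptions for exposition.

```python
import torch
import torch.nn as nn


class SmallScaleLMM(nn.Module):
    """Illustrative three-part LMM: vision encoder -> connector -> language model.

    `vision_encoder` (e.g. a CLIP/SigLIP ViT) and `language_model` (e.g. a
    small decoder-only LM) are passed in; only the connector is defined here.
    """

    def __init__(self, vision_encoder: nn.Module, language_model: nn.Module,
                 vision_dim: int, text_dim: int):
        super().__init__()
        self.vision_encoder = vision_encoder
        self.language_model = language_model
        # A two-layer MLP connector (one common choice among the
        # connection modules such a framework would ablate).
        self.connector = nn.Sequential(
            nn.Linear(vision_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, images: torch.Tensor, text_embeds: torch.Tensor):
        # Patch features: (batch, num_patches, vision_dim)
        vision_feats = self.vision_encoder(images)
        # Project visual tokens into the LM embedding space
        vision_tokens = self.connector(vision_feats)
        # Prepend visual tokens to the text embeddings and decode
        inputs = torch.cat([vision_tokens, text_embeds], dim=1)
        return self.language_model(inputs)
```

Swapping the vision encoder, connector, or language model in this composition corresponds to the axes of variation the framework evaluates.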
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 | Accuracy | 79.9 | 1165 |
| Visual Question Answering | TextVQA | Accuracy | 59.1 | 1117 |
| Visual Question Answering | VizWiz | Accuracy | 28.7 | 1043 |
| Visual Question Answering | GQA | Accuracy | 62 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy | 88.5 | 935 |
| Multimodal Evaluation | MME | Score | 1460 | 557 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 59.7 | 496 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 32 | 418 |
| Visual Question Answering | GQA | Accuracy | 65.6 | 374 |
| Multimodal Understanding | MMBench | Accuracy | 66.9 | 367 |