LLAVADI: What Matters For Multimodal Large Language Models Distillation
About
The recent surge in Multimodal Large Language Models (MLLMs) has showcased their remarkable potential for achieving generalized intelligence by integrating visual understanding into Large Language Models. Nevertheless, the sheer size of MLLMs leads to substantial memory and computational demands that hinder their widespread deployment. In this work, we do not propose a new efficient model structure or train small-scale MLLMs from scratch. Instead, we focus on what matters for training small-scale MLLMs through knowledge distillation, which is the first step from the multimodal distillation perspective. Our extensive studies cover training strategies, model choices, and distillation algorithms in the knowledge distillation process. The results show that joint alignment of both tokens and logits plays a critical role in teacher-student frameworks. In addition, we draw a series of intriguing observations from this study. With the proper strategy, even a 2.7B small-scale model can perform on par with larger 7B or 13B models across a range of benchmarks. Our code and models will be publicly available for further research.
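The abstract highlights joint token-level and logit-level alignment as the key ingredient of the teacher-student setup. As a rough illustration only (not the paper's actual implementation, and with hypothetical function and parameter names), a combined distillation objective of this kind might be sketched in NumPy as follows:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hypothetical combined objective: hard-label cross-entropy
    (token alignment) plus a temperature-scaled KL term that matches
    the teacher's output distribution (logit alignment)."""
    # Token alignment: cross-entropy against the ground-truth token ids.
    p_student = softmax(student_logits)
    n = len(labels)
    ce = -np.mean(np.log(p_student[np.arange(n), labels] + 1e-12))
    # Logit alignment: KL(teacher || student) at temperature T,
    # scaled by T^2 (the usual correction that keeps gradient
    # magnitudes comparable across temperatures).
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl
```

With `alpha` interpolating between the two terms, the student is pulled toward both the ground-truth tokens and the teacher's softened logit distribution; when student and teacher logits coincide, the KL term vanishes and only the cross-entropy remains.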
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Object Hallucination Evaluation | POPE | -- | 1455 |
| Visual Question Answering | GQA | Accuracy: 61.4 | 1249 |
| Text-based Visual Question Answering | TextVQA | Accuracy: 50.7 | 807 |
| Multimodal Evaluation | MME | Score: 68.8 | 658 |
| Multimodal Understanding | MMBench | Accuracy: 62.5 | 637 |
| Science Question Answering | ScienceQA (SQA) | Accuracy: 64.1 | 273 |
| Visual Question Answering | TextVQA v1.0 (val) | Accuracy: 45.3 | 84 |
| Visual Question Answering | GQA v1.0 (test) | Accuracy: 55.4 | 31 |
| Compositional Reasoning | Compositional Reasoning Suite (Aggregated) | Sugarcrepe Score: 76.9 | 23 |
| Visual Question Answering | General VQA (VQAv2, VizWiz, GQA, TextVQA, MME) | GQA Accuracy: 58.7 | 23 |