
MoE-LLaVA: Mixture of Experts for Large Vision-Language Models

About

Recent advances demonstrate that scaling Large Vision-Language Models (LVLMs) effectively improves downstream task performance. However, existing scaling methods keep all model parameters active for every token, which incurs massive training and inference costs. In this work, we propose a simple yet effective training strategy, MoE-Tuning, for LVLMs. This strategy addresses the common problem of performance degradation in multi-modal sparsity learning, yielding a sparse model with a very large number of parameters but a constant computational cost. Furthermore, we present MoE-LLaVA, a MoE-based sparse LVLM architecture that activates only the top-k experts through routers during deployment, keeping the remaining experts inactive. Extensive experiments show that MoE-LLaVA performs strongly across a variety of visual understanding and object hallucination benchmarks. Remarkably, with only approximately 3B sparsely activated parameters, MoE-LLaVA matches LLaVA-1.5-7B on various visual understanding datasets and even surpasses LLaVA-1.5-13B on the object hallucination benchmark. Through MoE-LLaVA, we aim to establish a baseline for sparse LVLMs and provide valuable insights for future research on more efficient and effective multi-modal learning systems. Code is released at https://github.com/PKU-YuanGroup/MoE-LLaVA.
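To illustrate the top-k expert routing the abstract describes, here is a minimal numpy sketch of a sparse MoE forward pass: a router scores all experts per token, but only the k highest-scoring experts are evaluated, so compute stays roughly constant as the expert count grows. All names and shapes below are illustrative assumptions, not the authors' implementation (which uses learned FFN experts inside a transformer).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(tokens, router_w, expert_ws, k=2):
    """Toy sparse MoE layer: route each token to its top-k experts.

    tokens:    (n_tokens, d) input features
    router_w:  (d, n_experts) router projection
    expert_ws: list of (d, d) expert weight matrices (toy linear experts)
    Only the k experts selected per token are evaluated; the rest stay inactive.
    """
    logits = tokens @ router_w                 # (n_tokens, n_experts)
    probs = softmax(logits)                    # routing probabilities
    topk = np.argsort(probs, axis=-1)[:, -k:]  # indices of the top-k experts
    out = np.zeros_like(tokens)
    for t in range(tokens.shape[0]):
        w = probs[t, topk[t]]
        w = w / w.sum()                        # renormalize top-k weights
        for weight, e in zip(w, topk[t]):
            out[t] += weight * (tokens[t] @ expert_ws[e])
    return out

rng = np.random.default_rng(0)
d, n_experts, n_tokens = 8, 4, 5
x = rng.standard_normal((n_tokens, d))
router = rng.standard_normal((d, n_experts))
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
y = moe_layer(x, router, experts, k=2)
print(y.shape)
```

With k=2 and 4 experts, each token touches only half the expert parameters per forward pass; adding more experts increases capacity without increasing per-token compute.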

Bin Lin, Zhenyu Tang, Yang Ye, Jinfa Huang, Junwu Zhang, Yatian Pang, Peng Jin, Munan Ning, Jiebo Luo, Li Yuan• 2024

Related benchmarks

Task                               Dataset   Metric         Result  Rank
Visual Question Answering          VizWiz    Accuracy       43.9    1525
Object Hallucination Evaluation    POPE      Accuracy       87.0    1455
Visual Question Answering          VQA v2    Accuracy       79.9    1362
Visual Question Answering          TextVQA   Accuracy       57.0    1285
Visual Question Answering          GQA       Accuracy       62.6    1249
Text-based Visual Question Answering  TextVQA  Accuracy    48.0    807
Multimodal Evaluation              MME       Score          1420    658
Multimodal Understanding           MMBench   Accuracy       65.2    637
Multimodal Understanding           MM-Vet    MM-Vet Score   35.9    531
Visual Question Answering          GQA       Accuracy       61.5    505

Showing 10 of 62 rows
