
MoVE-KD: Knowledge Distillation for VLMs with Mixture of Visual Encoders

About

Visual encoders are fundamental components in vision-language models (VLMs), each showcasing unique strengths derived from various pre-trained visual foundation models. To leverage the various capabilities of these encoders, recent studies incorporate multiple encoders within a single VLM, leading to a considerable increase in computational cost. In this paper, we present Mixture-of-Visual-Encoder Knowledge Distillation (MoVE-KD), a novel framework that distills the unique proficiencies of multiple vision encoders into a single, efficient encoder model. Specifically, to mitigate conflicts and retain the unique characteristics of each teacher encoder, we employ low-rank adaptation (LoRA) and mixture-of-experts (MoEs) to selectively activate specialized knowledge based on input features, enhancing both adaptability and efficiency. To regularize the KD process and enhance performance, we propose an attention-based distillation strategy that adaptively weighs the different encoders and emphasizes valuable visual tokens, reducing the burden of replicating comprehensive but distinct features from multiple teachers. Comprehensive experiments on popular VLMs, such as LLaVA and LLaVA-NeXT, validate the effectiveness of our method. Our code is available at: https://github.com/hey-cjj/MoVE-KD.
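The two ideas in the abstract — routing low-rank (LoRA) expert updates over a frozen encoder projection, and a distillation loss that weighs teachers and emphasizes informative tokens via attention — can be sketched roughly as follows. This is a minimal, hypothetical NumPy illustration, not the paper's implementation: all function names, the top-k routing details, and the use of [CLS] attention as token weights are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_lora_forward(x, W, loras, router_w, top_k=1):
    """Hypothetical mixture-of-LoRA-experts layer.

    x: (n_tokens, d_in) input features; W: frozen base projection (d_in, d_out).
    Each expert i is a low-rank pair (A_i: d_in x r, B_i: r x d_out).
    A router scores experts per token and only the top-k LoRA updates
    are mixed into the frozen base output.
    """
    base = x @ W                                  # frozen base path
    gate = softmax(x @ router_w)                  # (n_tokens, n_experts)
    # zero out all but the top-k experts per token, then renormalize
    drop = np.argsort(gate, axis=-1)[:, :-top_k]
    mask = np.ones_like(gate)
    np.put_along_axis(mask, drop, 0.0, axis=-1)
    gate = gate * mask
    gate = gate / gate.sum(axis=-1, keepdims=True)
    out = base.copy()
    for i, (A, B) in enumerate(loras):
        out += gate[:, i:i + 1] * (x @ A @ B)     # weighted LoRA update
    return out

def attention_weighted_kd_loss(student, teachers, cls_attn, enc_w):
    """Hypothetical attention-based distillation loss.

    Each teacher's attention over tokens (e.g. from its [CLS] query) is
    normalized into per-token weights, so the student is pushed hardest
    toward the tokens that teacher finds informative; enc_w weighs the
    teachers themselves.
    """
    loss = 0.0
    for T, attn, w in zip(teachers, cls_attn, enc_w):
        tok_w = attn / attn.sum()                 # (n_tokens,)
        loss += w * np.sum(tok_w[:, None] * (student - T) ** 2)
    return loss
```

A quick usage pass: project a batch of tokens through the mixed-expert layer, then distill against two teachers with unequal encoder weights. The point of the sketch is the shape of the computation, not the numbers.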

Jiajun Cao, Yuan Zhang, Tao Huang, Ming Lu, Qizhe Zhang, Ruichuan An, Ningning Ma, Shanghang Zhang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | VQA v2 | Accuracy: 83.1 | 1165 |
| Visual Question Answering | TextVQA | Accuracy: 65.8 | 1117 |
| Visual Question Answering | VizWiz | Accuracy: 60.9 | 1043 |
| Visual Question Answering | GQA | Accuracy: 65.7 | 963 |
| Object Hallucination Evaluation | POPE | -- | 935 |
| Multimodal Evaluation | MME | Score: 1.58e+3 | 557 |
| Text-based Visual Question Answering | TextVQA | Accuracy: 44.3 | 496 |
| Multimodal Understanding | MMBench | Accuracy: 48.8 | 367 |
| Visual Question Answering | ScienceQA | Accuracy: 73.7 | 210 |
| Multimodal Model Evaluation | MMBench | Accuracy: 70.6 | 180 |

Showing 10 of 11 rows.

Other info

Code

https://github.com/hey-cjj/MoVE-KD