
Long-Tailed Distribution-Aware Router For Mixture-of-Experts in Large Vision-Language Model

About

The mixture-of-experts (MoE) architecture, which replaces dense networks with sparse ones, has attracted significant attention in large vision-language models (LVLMs) for achieving comparable performance while activating far fewer parameters. Existing MoE architectures for LVLMs primarily focus on token-to-expert routing (TER), encouraging different experts to specialize in processing specific tokens. However, these methods typically rely on load-balancing mechanisms, neglecting the inherent distributional differences between the vision and language modalities. To address this limitation, we propose the Long-Tailed Distribution-aware Router (LTDR) for vision-language TER, which tackles two key challenges: (1) Modality-specific distribution-aware routing. We observe that language TER generally follows a relatively uniform distribution, whereas vision TER exhibits a long-tailed distribution. This modality discrepancy motivates the design of specialized routing strategies for each modality. (2) Vision-specific dynamic expert activation. Recognizing the importance of high-information vision tail tokens, we introduce a data-augmentation-inspired strategy that increases the number of activated experts, ensuring sufficient learning for these rare but informative tokens. Our approach achieves consistent improvements, boosting performance by 1.2% / 2.1% on vision-language benchmarks and 1.6% on vision benchmarks.
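The two ideas above can be illustrated with a minimal NumPy sketch of a softmax top-k router: language tokens are routed with a fixed top-k, while vision tokens whose top-1 expert falls in the rarely-used ("tail") part of the expert-usage distribution receive extra activated experts. The function names, the `tail_quantile` threshold, and the usage-based tail detection are illustrative assumptions for exposition, not the paper's exact LTDR mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def route(router_logits, modality, base_k=2, extra_k=1, tail_quantile=0.75):
    """Hypothetical modality-aware top-k routing (illustrative, not LTDR itself).

    router_logits: (num_tokens, num_experts) raw router scores.
    modality: 'language' -> plain top-k for every token;
              'vision'   -> tokens whose top-1 expert is a low-usage
                            'tail' expert get extra_k additional experts.
    Returns a list of per-token expert-index lists.
    """
    probs = softmax(router_logits)
    num_tokens, num_experts = probs.shape
    top1 = probs.argmax(axis=1)
    if modality == 'language':
        # Language TER is roughly uniform, so a fixed top-k suffices.
        k_per_token = np.full(num_tokens, base_k)
    else:
        # Estimate expert usage from top-1 assignments; experts at or below
        # the tail_quantile usage cutoff are treated as tail experts.
        usage = np.bincount(top1, minlength=num_experts)
        cutoff = np.quantile(usage, tail_quantile)
        tail_experts = usage <= cutoff
        k_per_token = np.where(tail_experts[top1], base_k + extra_k, base_k)
    # Pick each token's k highest-probability experts (k varies per token).
    order = np.argsort(-probs, axis=1)
    return [order[i, :k_per_token[i]].tolist() for i in range(num_tokens)]
```

In this sketch, three tokens that strongly prefer a "head" expert keep the default two experts, while a token routed to a rarely-chosen expert activates three, mimicking the extra capacity LTDR grants to rare but informative vision tail tokens.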

Chaoxiang Cai, Longrong Yang, Minghe Weng, Xuewei Li, Zequn Qin, Xi Li · 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy | 87.5 | 1455 |
| Visual Question Answering | TextVQA | Accuracy | 52.9 | 1285 |
| Multimodal Evaluation | MM-Vet | Score | 34.9 | 180 |
| Visual Question Answering | GQA | GQA Score | 62.2 | 85 |
| Multimodal Model Evaluation | MME | Total Score | 1450 | 71 |
| Domain Generalization | OfficeHome, PACS and VLCS (Average) | Accuracy | 86.1 | 26 |
| Multimodal Benchmarking | MMBench | Accuracy | 66.8 | 10 |
| Vision-Language Understanding | Vision-Language Evaluation Suite (ChartQA, DocVQA, AI2D, VQA, AndroidControl, CountBenchQA) | ChartQA Accuracy | 68.1 | 2 |
