Long-Tailed Distribution-Aware Router For Mixture-of-Experts in Large Vision-Language Model
About
The mixture-of-experts (MoE) architecture, which replaces dense networks with sparse ones, has attracted significant attention in large vision-language models (LVLMs) for achieving comparable performance while activating far fewer parameters. Existing MoE architectures for LVLMs primarily focus on token-to-expert routing (TER), encouraging different experts to specialize in processing specific tokens. However, these methods typically rely on a load-balancing mechanism and neglect the inherent distributional differences between the vision and language modalities. To address this limitation, we propose the Long-Tailed Distribution-aware Router (LTDR) for vision-language TER, which tackles two key challenges: (1) Modality-specific distribution-aware routing. We observe that language TER generally follows a relatively uniform distribution, whereas vision TER exhibits a long-tailed distribution. This modality discrepancy motivates specialized routing strategies for each modality. (2) Vision-specific dynamic expert activation. Recognizing the importance of high-information vision tail tokens, we introduce a data-augmentation-inspired strategy that increases the number of activated experts for them, ensuring sufficient learning for these rare but informative tokens. Our approach achieves consistent gains, improving performance by 1.2% / 2.1% on vision-language benchmarks and by 1.6% on vision benchmarks.
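The two ideas above can be illustrated with a minimal routing sketch. This is a hypothetical toy implementation, not the paper's actual LTDR: the function name `route_tokens`, the parameter names, and the use of a score quantile to flag "tail" vision tokens are all our assumptions for illustration. It shows the shape of the idea: language tokens get a fixed top-k, while vision tokens identified as tail tokens activate extra experts.

```python
import numpy as np

def route_tokens(logits, modality, k_lang=2, k_vis=2, k_tail=4, tail_quantile=0.9):
    """Toy sketch of modality-aware top-k routing (not the paper's exact LTDR).

    logits:   (num_tokens, num_experts) raw router scores
    modality: array of "lang" / "vis" flags, one per token
    Language tokens use a fixed top-k (roughly uniform TER); vision tokens
    whose peak routing score falls in the upper tail activate more experts,
    mimicking vision-specific dynamic expert activation.
    """
    # Softmax over experts for each token
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    top_score = probs.max(axis=1)

    vis_mask = modality == "vis"
    # Tail threshold computed over vision tokens only (an assumption of this sketch)
    tail_thr = np.quantile(top_score[vis_mask], tail_quantile) if vis_mask.any() else np.inf

    assignments = []
    for i in range(len(logits)):
        if modality[i] == "lang":
            k = k_lang                      # language TER: fixed, uniform budget
        elif top_score[i] >= tail_thr:
            k = k_tail                      # rare, informative vision token: more experts
        else:
            k = k_vis
        topk = np.argsort(probs[i])[::-1][:k]
        assignments.append(topk.tolist())
    return assignments
```

In a real MoE layer the selected experts' outputs would be combined with the corresponding routing weights; the sketch only returns the per-token expert index sets to make the asymmetric activation budget visible.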
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy | 87.5 | 1455 |
| Visual Question Answering | TextVQA | Accuracy | 52.9 | 1285 |
| Multimodal Evaluation | MM-Vet | Score | 34.9 | 180 |
| Visual Question Answering | GQA | GQA Score | 62.2 | 85 |
| Multimodal Model Evaluation | MME | Total Score | 1450 | 71 |
| Domain Generalization | OfficeHome, PACS and VLCS (Average) | Accuracy | 86.1 | 26 |
| Multimodal Benchmarking | MMBench | Accuracy | 66.8 | 10 |
| Vision-Language Understanding | Vision-Language Evaluation Suite (ChartQA, DocVQA, AI2D, VQA, AndroidControl, CountBenchQA) | ChartQA Accuracy | 68.1 | 2 |