
Improving MoE Compute Efficiency by Composing Weight and Data Sparsity

About

Mixture-of-Experts layers achieve compute efficiency through weight sparsity: each token activates only a subset of experts. Data sparsity, where each expert processes only a subset of tokens, offers a complementary axis. Expert-choice routing implements data sparsity directly but violates causality in autoregressive models, creating a train-inference mismatch. We recover data sparsity within causal token-choice MoE by adding zero-compute (null) experts to the routing pool. When a token routes to null experts, those slots consume no compute. The standard load-balancing objective trains the model to use all experts (real and null) uniformly, thereby creating data sparsity in expectation without the causality violations. We evaluate on vision-language model training, where data heterogeneity is pronounced: vision encoders produce many low-information tokens while text tokens are denser. At matched expected FLOPs, composing weight and data sparsity yields a more compute-efficient frontier than weight sparsity alone, with gains in training loss and downstream performance. The model learns implicit modality-aware allocation, routing vision tokens to null experts more aggressively than text, without explicit modality routing.
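The mechanism above can be sketched in a few lines: run ordinary top-k token-choice routing over a pool of real plus null experts, and simply skip compute for slots that land on null experts. This is an illustrative NumPy sketch, not the paper's implementation; all names, shapes, and the use of a plain linear map per expert are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_with_null_experts(tokens, router_w, expert_w, k=2):
    """Top-k token-choice routing over real + null experts (illustrative).

    tokens:   [n_tokens, d]
    router_w: [d, n_real + n_null]  -- logits over the full routing pool
    expert_w: [n_real, d, d]        -- parameters for *real* experts only;
                                       null experts have none.
    """
    n_real = expert_w.shape[0]
    probs = softmax(tokens @ router_w)            # [n_tokens, n_experts]
    topk = np.argsort(-probs, axis=-1)[:, :k]     # each token picks independently (causal)
    out = np.zeros_like(tokens)
    real_calls = 0                                # count of paid expert evaluations
    for t in range(tokens.shape[0]):
        for e in topk[t]:
            if e < n_real:                        # real expert: pay compute
                out[t] += probs[t, e] * (tokens[t] @ expert_w[e])
                real_calls += 1
            # else: null expert slot -- zero output, zero FLOPs
    return out, real_calls

rng = np.random.default_rng(0)
d, n_tokens, n_real, n_null = 8, 16, 4, 4
x = rng.normal(size=(n_tokens, d))
y, calls = moe_with_null_experts(x,
                                 rng.normal(size=(d, n_real + n_null)),
                                 rng.normal(size=(n_real, d, d)))
# calls <= n_tokens * k: every slot routed to a null expert is compute saved.
```

With a load-balancing loss pushing uniform usage across the pooled experts, the expected fraction of null slots (and hence the expected FLOP savings) is controlled by the ratio of null to real experts.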

Maciej Kilian, Oleg Mkrtchyan, Luke Zettlemoyer, Akshat Shrivastava, Armen Aghajanyan • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| OCR Evaluation | OCRBench | Score | 880 | 296 |
| Comprehensive Evaluation | SeedBench (all) | Score | 76.8 | 19 |
| OCR-related understanding | DocVQA | Score | 93.8 | 10 |
| Knowledge / General QA | MME | Score | 2240 | 6 |
| General Vision-Language | VQA v2 | VQA v2 Accuracy | 82.6 | 5 |
| OCR | TextVQA | Score | 82 | 4 |
| General | RealworldQA | Score | 0.751 | 4 |
| General | BLINK | Score | 55.6 | 4 |
| General | MathVista | Score | 73.9 | 4 |
| General | AI2D | Score | 79.1 | 4 |

Showing 10 of 22 rows
