
XShare: Collaborative in-Batch Expert Sharing for Faster MoE Inference

About

Mixture-of-Experts (MoE) architectures are increasingly used to scale large language models efficiently. In production inference, however, request batching and speculative decoding significantly amplify expert activation, eroding these efficiency benefits. We address this issue by modeling batch-aware expert selection as a modular optimization problem and designing efficient greedy algorithms for different deployment settings. The proposed method, XShare, requires no retraining and adapts dynamically to each batch by maximizing the total gating score of the selected experts. It reduces expert activation by up to 30% under standard batching, cuts peak GPU load by up to 3x in expert-parallel deployments, and achieves up to 14% throughput gains in speculative decoding via hierarchical, correlation-aware expert selection, even when the requests in a batch are drawn from heterogeneous datasets.
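
To make the modular objective concrete, here is a minimal Python sketch of greedy batch-aware expert selection. The function name, the `budget` parameter, and the use of plain column sums over router scores are illustrative assumptions, not the paper's actual algorithm, which also handles expert-parallel and speculative-decoding settings.

```python
import numpy as np

def greedy_expert_selection(gating_scores: np.ndarray, budget: int) -> list[int]:
    """Pick a shared expert set for one batch by total gating score.

    gating_scores: (num_tokens, num_experts) router scores for every
                   token in the batch (hypothetical layout).
    budget:        number of experts allowed to stay active.
    """
    # Total gating mass each expert would contribute if selected.
    per_expert_mass = gating_scores.sum(axis=0)
    # The objective is modular (a sum over chosen experts), so in this
    # simplified variant keeping the top-`budget` column sums is exact.
    top = np.argsort(per_expert_mass)[::-1][:budget]
    return top.tolist()

# Example: a batch of 32 tokens routed over 64 experts, keeping 8 active.
rng = np.random.default_rng(0)
scores = rng.random((32, 64))
active_experts = greedy_expert_selection(scores, budget=8)
```

Because the objective is a sum over the chosen experts, sorting experts by their aggregate gating mass and keeping the top `budget` maximizes the total gating score exactly in this simplified setting; the batch-adaptivity comes from recomputing the selection for every incoming batch.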

Daniil Vankov, Nikita Ivkin, Kyle Ulrich, Xiang Song, Ashish Khetan, George Karypis • 2026

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Instruction Following | IFEval | Accuracy (0-100) | 69.6 | 292 |
| Mathematical Reasoning | AIME 2025 | OTPS | 186.5 | 9 |
| Science Question Answering | GPQA | OTPS | 180.3 | 9 |
| Mathematical Reasoning | GSM-8K | Accuracy | 94.6 | 2 |
