
Lynx: Enabling Efficient MoE Inference through Dynamic Batch-Aware Expert Selection

About

Selective parameter activation, as provided by Mixture-of-Experts (MoE) models, has made them a popular choice in modern foundation models. However, MoEs face a fundamental tension when used for serving. Batching, which is critical for serving performance, forces the activation of all experts, thereby negating MoEs' benefits and exacerbating memory bandwidth bottlenecks. Existing work on efficient MoE inference is unable to resolve this tension even with extensive workload-specific tuning. We present LYNX, a system that enables efficient MoE inference in a workload-agnostic fashion. Exploiting several key observations that we uncover in this work, LYNX provides a lightweight run-time dynamic expert remapping technique that depends only on information already available in the models. Our evaluation of LYNX on four state-of-the-art model families across nine benchmarks shows that it achieves up to 1.23x improvement in throughput while simultaneously improving accuracy by up to 4% on the majority of tasks, and incurs only a negligible accuracy loss of less than 1 percentage point on significantly harder tasks. Further, LYNX is complementary to existing techniques, additionally boosting their performance by up to 1.38x.
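The core idea of batch-aware expert selection described in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not LYNX's actual algorithm: it assumes per-token router probabilities (information already available in the model, as the abstract notes) and keeps only the experts with the highest aggregate routing demand across the batch, up to a fixed budget. The function name and parameters are invented for illustration.

```python
import numpy as np

def batch_aware_expert_selection(router_probs, top_k=2, expert_budget=4):
    """Hypothetical sketch: select a batch-level subset of experts.

    router_probs: (batch, num_experts) router probabilities per token.
    Returns the expert ids kept for this batch, capped at expert_budget
    by aggregate routing weight across all tokens.
    """
    batch, num_experts = router_probs.shape
    # Per-token top-k experts, as a standard MoE router would pick.
    topk_ids = np.argsort(router_probs, axis=1)[:, -top_k:]
    # Aggregate routing weight each expert receives across the batch.
    agg_weight = np.zeros(num_experts)
    for t in range(batch):
        for e in topk_ids[t]:
            agg_weight[e] += router_probs[t, e]
    # Keep only the expert_budget most-demanded experts for the batch,
    # so activated parameters stay bounded even as batch size grows.
    keep = np.argsort(agg_weight)[-expert_budget:]
    return {int(e) for e in keep if agg_weight[e] > 0}

# Example: three tokens, four experts, budget of two experts per batch.
probs = np.array([
    [0.70, 0.20, 0.10, 0.00],
    [0.60, 0.30, 0.10, 0.00],
    [0.10, 0.15, 0.70, 0.05],
])
selected = batch_aware_expert_selection(probs, top_k=2, expert_budget=2)
```

In this sketch, experts 0 and 2 dominate the batch's routing demand, so tokens whose preferred experts fall outside the budget would need to be remapped to the retained set, which is the remapping step the abstract alludes to.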

Vima Gupta, Jae Hyung Ju, Kartik Sinha, Ada Gavrilovska, Anand Padmanabha Iyer • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Large Language Model Evaluation | OpenCompass | cMMLU | 81.36 | 11 |
| Reasoning | OpenCompass (test) | CMMLU | 42.57 | 11 |
