MoE-Infinity: Efficient MoE Inference on Personal Machines with Sparsity-Aware Expert Cache
About
This paper presents MoE-Infinity, an efficient MoE inference system designed for personal machines with limited GPU memory. The key observation behind MoE-Infinity is that personal machines are typically single-user environments, so MoE-based LLMs usually run with a batch size of one. In this setting, MoE models exhibit a high degree of activation sparsity: only a small number of experts are repeatedly reused when generating tokens during the decode phase. Leveraging this observation, we design a sparsity-aware expert cache, which traces the sparse activation of experts during inference and carefully selects the traces that best represent the sparsity pattern. By analyzing the selected traces, MoE-Infinity guides replacement and prefetching in the expert cache, delivering 3.1-16.7x per-token latency improvements over state-of-the-art systems, including vLLM, Ollama, DeepSpeed, and BrainStorm, across various MoE models (DeepSeek and Mixtral) on different LLM tasks. A sketch of the cache idea is shown below. MoE-Infinity's source code is publicly available at https://github.com/EfficientMoE/MoE-Infinity.
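The sketch below illustrates the general idea of a sparsity-aware expert cache: record which experts each decoded token activates, keep a sliding window of recent traces, evict the least-reused resident expert when the cache is full, and surface frequently reused experts as prefetch candidates. This is a minimal, hypothetical Python sketch, not MoE-Infinity's actual implementation; the class and method names are illustrative, and the real system's trace selection and prefetching logic live in the repository linked above.

```python
# Minimal sketch of a sparsity-aware expert cache (illustrative only).
from collections import Counter, deque


class SparsityAwareExpertCache:
    """Keeps hot experts resident on the GPU, guided by recent activation traces."""

    def __init__(self, capacity: int, trace_window: int = 512):
        self.capacity = capacity                  # max experts resident on GPU
        self.resident = set()                     # expert ids currently cached
        self.trace = deque(maxlen=trace_window)   # recent per-token activations
        self.freq = Counter()                     # reuse counts within the window

    def record_activation(self, expert_ids):
        """Trace the experts activated while decoding one token."""
        if len(self.trace) == self.trace.maxlen:
            for e in self.trace[0]:
                self.freq[e] -= 1                 # expire the oldest trace entry
        self.trace.append(tuple(expert_ids))
        self.freq.update(expert_ids)

    def ensure_resident(self, expert_id):
        """Bring an expert into the cache, evicting the least-reused one if full."""
        if expert_id in self.resident:
            return
        if len(self.resident) >= self.capacity:
            victim = min(self.resident, key=lambda e: self.freq[e])
            self.resident.discard(victim)         # evict the coldest expert
        self.resident.add(expert_id)              # stands in for a host-to-GPU copy

    def prefetch_candidates(self, k: int):
        """Experts most frequently reused in the window: likely prefetch targets."""
        return [e for e, _ in self.freq.most_common(k) if e not in self.resident]
```

In such a design, `record_activation` would be called after each token is routed, and `prefetch_candidates` could drive asynchronous host-to-GPU copies for experts likely to be reused on upcoming tokens.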
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | GSM8K | -- | 246 |
| Instruction Following | Alpaca | -- | 111 |
| Question Answering | QA | -- | 47 |
| Text Summarization | CNN/DM | 3.56 TPS (tokens/s) | 13 |
| Chat Evaluation | MT-Bench | 3.64 TPS (tokens/s) | 10 |
| Code Generation | HumanEval | 3.63 TPS (tokens/s) | 10 |
| Language Understanding | MMLU-Pro | 3.64 TPS (tokens/s) | 10 |