
Mixture Compressor for Mixture-of-Experts LLMs Gains More

About

Mixture-of-Experts large language models (MoE-LLMs) mark a significant step forward for language models, yet they face two critical challenges in practice: 1) expert parameters lead to considerable memory consumption and loading latency; and 2) the currently activated experts are redundant, as many tokens may require only a single expert. Motivated by these issues, we investigate MoE-LLMs and make two key observations: a) different experts exhibit varying behaviors in activation reconstruction error, routing scores, and activation frequencies, highlighting their differing importance; and b) not all tokens are equally important -- only a small subset is critical. Building on these insights, we propose MC, a training-free Mixture-Compressor for MoE-LLMs that leverages the significance of both experts and tokens to achieve extreme compression. First, to mitigate storage and loading overheads, we introduce Pre-Loading Mixed-Precision Quantization, which formulates adaptive bit-width allocation as a Linear Programming problem whose objective function balances multiple factors reflecting the importance of each expert. Second, we develop Online Dynamic Pruning, which identifies important tokens to retain and dynamically selects activated experts for the remaining tokens during inference, optimizing efficiency while maintaining performance. By integrating static quantization and dynamic pruning, MC achieves extreme compression of MoE-LLMs with little accuracy loss, ensuring an optimal trade-off between performance and efficiency. Extensive experiments confirm the effectiveness of our approach. For instance, at 2.54 bits, MC compresses 76.6% of the model with only a 3.8% average accuracy loss. During dynamic inference, we further reduce activated parameters by 15% with a performance drop of less than 0.6%.
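The bit-width allocation idea can be illustrated with a toy sketch. This is not the paper's implementation: the importance scores, the error model, and the exhaustive search below are all illustrative assumptions (a real system would use an LP/ILP solver over many experts), but it shows the shape of the problem -- assign each expert a bit-width so that importance-weighted quantization error is minimized under an average-bit budget.

```python
import itertools

BIT_CHOICES = (2, 3, 4)            # candidate bit-widths per expert
importance = [0.9, 0.4, 0.7, 0.2]  # hypothetical per-expert importance scores
avg_bit_budget = 2.75              # target average bits across experts

def quant_error(bits):
    # Hypothetical error model: quantization error shrinks as bit-width grows.
    return 2.0 ** (-bits)

def allocate_bits(importance, budget):
    """Exhaustively solve the tiny allocation problem.

    At real scale this would be posed to an LP/ILP solver; with 4 experts
    and 3 bit choices there are only 3**4 = 81 assignments to check.
    """
    best, best_cost = None, float("inf")
    for assign in itertools.product(BIT_CHOICES, repeat=len(importance)):
        if sum(assign) / len(assign) > budget:
            continue  # violates the average-bit constraint
        cost = sum(w * quant_error(b) for w, b in zip(importance, assign))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best

print(allocate_bits(importance, avg_bit_budget))  # -> (4, 2, 3, 2)
```

As expected, the solver spends the scarce extra bits on the most important experts (4 bits for the 0.9-importance expert) and keeps the least important ones at 2 bits.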

Wei Huang, Yue Liao, Jianhui Liu, Ruifei He, Haoru Tan, Shiming Zhang, Hongsheng Li, Si Liu, Xiaojuan Qi• 2024

Related benchmarks

Task | Dataset | Result | Rank
Video Understanding | MVBench | Accuracy: 67.38 | 247
Video Understanding | VideoMME | -- | 192
Chart Understanding | ChartQA | Accuracy: 83.22 | 83
Visual Question Answering | TextVQA | Accuracy: 86.28 | 69
Video Understanding | EgoSchema | Accuracy: 59.03 | 49
Image Understanding | MME | Score: 2190 | 39
Multi-modal Understanding | MMVet | Accuracy: 68.67 | 35
Image Understanding | Image Understanding Suite (TextVQA, ChartQA, MMStar, MMBench, MMVet, MME, RealWorldQA, COCO) | TextVQA Score: 82.51 | 34
Video Understanding | Video Understanding Suite (MVBench, EgoSchema, VMME, LVB, VMMMU) | MVBench Score: 62.61 | 34
Real-world Visual Understanding | RealWorldQA | Accuracy: 62.13 | 24

Showing 10 of 18 rows
