
Delta Decompression for MoE-based LLMs Compression

About

Mixture-of-Experts (MoE) architectures in large language models (LLMs) achieve exceptional performance but face prohibitive storage and memory requirements. To address these challenges, we present $D^2$-MoE, a new delta-decompression compressor for reducing the parameters of MoE LLMs. Motivated by observations of expert diversity, we decompose each expert's weights into a shared base weight and a unique delta weight. Specifically, our method first merges the experts' weights into the base weight using the Fisher information matrix to capture shared components. We then compress the delta weights through Singular Value Decomposition (SVD), exploiting their low-rank properties. Finally, we introduce a semi-dynamic structured pruning strategy for the base weights, combining static and dynamic redundancy analysis to achieve further parameter reduction while maintaining input adaptivity. In this way, $D^2$-MoE compacts MoE LLMs to high compression ratios without additional training. Extensive experiments highlight the superiority of our approach, with over 13% performance gains over other compressors on Mixtral, Phi-3.5, DeepSeek, and Qwen2 MoE LLMs at 40$\sim$60% compression rates. Code is available at https://github.com/lliai/D2MoE.
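The first two stages of the pipeline can be sketched numerically: a Fisher-weighted merge of expert weights into a shared base, followed by truncated SVD of each expert's delta. This is a minimal illustration of the general idea, not the authors' implementation; the function names, the use of a diagonal Fisher approximation, and the toy shapes below are assumptions.

```python
import numpy as np

def fisher_weighted_base(expert_weights, fisher_diags):
    # Merge expert weight matrices into a shared base weight, weighting
    # each expert elementwise by its (assumed diagonal) Fisher information.
    fisher = np.stack(fisher_diags)      # (num_experts, d_out, d_in)
    weights = np.stack(expert_weights)   # (num_experts, d_out, d_in)
    return (fisher * weights).sum(axis=0) / (fisher.sum(axis=0) + 1e-12)

def svd_compress_delta(delta, rank):
    # Low-rank approximation of one expert's delta weight via truncated SVD:
    # delta ~= A @ B, storing two thin factors instead of the full matrix.
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    A = U[:, :rank] * S[:rank]           # (d_out, rank)
    B = Vt[:rank]                        # (rank, d_in)
    return A, B

# Hypothetical toy setup: 4 experts with 64x64 weight matrices.
rng = np.random.default_rng(0)
experts = [rng.standard_normal((64, 64)) for _ in range(4)]
fishers = [rng.random((64, 64)) for _ in range(4)]

base = fisher_weighted_base(experts, fishers)
A, B = svd_compress_delta(experts[0] - base, rank=8)
approx_expert = base + A @ B             # reconstructed expert weight
```

Storing the base once plus a rank-8 factorization per expert replaces each full 64x64 matrix with two thin 64x8 and 8x64 factors, which is where the parameter savings come from when deltas are close to low-rank.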

Hao Gu, Wei Li, Lujun Li, Qiyuan Zhu, Mark Lee, Shengjie Sun, Wei Xue, Yike Guo • 2025

Related benchmarks

Task                              Dataset                                  Metric           Result   Rank
Commonsense Reasoning             HellaSwag                                Accuracy         61       1891
Language Modeling                 WikiText-2                               Perplexity (PPL) 6.84     1624
Commonsense Reasoning             WinoGrande                               Accuracy         75       1085
Question Answering                ARC Challenge                            Accuracy         45       906
Question Answering                ARC Easy                                 Accuracy         75       597
Physical Commonsense Reasoning    PIQA                                     Accuracy         79       572
Mathematical Reasoning            MathQA                                   Accuracy         36       305
Language Modeling                 C4                                       C4 Loss          12.62    121
Language Modeling                 PennTreeBank (PTB)                       PPL              11.1     87
Zero-shot Evaluation              ARC-Easy, ARC-Challenge, OpenBookQA,     Mean Accuracy    66.35    59
                                  WinoGrande, PIQA, HellaSwag, MathQA,
                                  RTE, BoolQ (zero-shot)
