
Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy

About

Sparsely activated Mixture-of-Experts (SMoE) has shown promise in scaling up the learning capacity of neural networks. However, it suffers from issues such as (a) high memory usage, due to the duplication of network layers into multiple copies as experts, and (b) redundancy among experts, since common learning-based routing policies suffer from representational collapse. Vanilla SMoE models are therefore memory-inefficient and non-scalable, especially in resource-constrained downstream scenarios. In this paper, we ask: Can we craft a compact SMoE model by consolidating expert information? What is the best recipe to merge multiple experts into fewer but more knowledgeable experts? Our pilot investigation reveals that conventional model-merging methods are ineffective for such expert merging in SMoE. The likely reasons are: (1) redundant information overshadows critical experts; and (2) an appropriate neuron permutation for each expert is missing to bring all of them into alignment. To address this, we propose M-SMoE, which leverages routing statistics to guide expert merging. Specifically, it starts with neuron-permutation alignment across experts; then, dominant experts and their "group members" are identified; lastly, every expert group is merged into a single expert, using each expert's activation frequency as its merging weight, thereby diminishing the impact of insignificant experts. Moreover, we observe that our proposed merging promotes low dimensionality in the merged expert's weight space, naturally paving the way for additional compression. Hence, our final method, MC-SMoE (i.e., Merge, then Compress SMoE), further decomposes the merged experts into low-rank and structurally sparse alternatives. Extensive experiments across 8 benchmarks validate the effectiveness of MC-SMoE. For instance, MC-SMoE achieves up to 80% memory reduction and a 20% FLOPs reduction, with virtually no loss in performance.
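To make the merging recipe concrete, below is a minimal sketch of the two steps the abstract describes: a frequency-weighted average of permutation-aligned expert weights, followed by a truncated-SVD low-rank decomposition of the merged expert. This is an assumed interpretation for illustration only; the function names, toy shapes, and rank choice are hypothetical and not the authors' released implementation (which additionally applies structured sparsity).

```python
# Sketch (assumptions, not the MC-SMoE codebase): merge a group of experts
# using router activation frequencies as weights, then compress the result.
import torch

def merge_expert_group(expert_weights, activation_freqs):
    """Merge a group of expert weight matrices into a single expert.

    expert_weights: list of [d_out, d_in] tensors, already permutation-aligned.
    activation_freqs: counts of how often the router selected each expert;
    normalized into merging coefficients so rarely used experts contribute less.
    """
    freqs = torch.tensor(activation_freqs, dtype=torch.float32)
    coeffs = freqs / freqs.sum()                      # normalize to sum to 1
    stacked = torch.stack(expert_weights)             # [n_experts, d_out, d_in]
    return (coeffs.view(-1, 1, 1) * stacked).sum(0)   # weighted average

def compress_low_rank(w, rank):
    """Decompose a merged expert into a rank-r product via truncated SVD."""
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    # Factors A [d_out, r] and B [r, d_in] with A @ B ~= w.
    return u[:, :rank] * s[:rank], vh[:rank]

# Toy usage: three aligned 8x16 experts, routed to 500/300/50 times.
experts = [torch.randn(8, 16) for _ in range(3)]
merged = merge_expert_group(experts, [500, 300, 50])
a, b = compress_low_rank(merged, rank=4)
print(merged.shape, a.shape, b.shape)  # [8, 16], [8, 4], [4, 16]
```

Using activation frequency as the merging weight is the key routing hint: experts the router rarely selects are down-weighted instead of diluting the dominant expert equally, which is one reason a plain uniform average of experts underperforms.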

Pingzhi Li, Zhenyu Zhang, Prateek Yadav, Yi-Lin Sung, Yu Cheng, Mohit Bansal, Tianlong Chen • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 53 | 1891 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 12.76 | 1624 |
| Mathematical Reasoning | GSM8K | Accuracy | 90.1 | 1362 |
| Commonsense Reasoning | WinoGrande | Accuracy | 65 | 1085 |
| Question Answering | ARC Challenge | Accuracy | 52.73 | 906 |
| Language Understanding | MMLU | Accuracy | 23 | 825 |
| Question Answering | ARC Easy | Accuracy | 76.98 | 597 |
| Question Answering | OpenBookQA | Accuracy | 44.4 | 465 |
| Natural Language Inference | RTE | Accuracy | 90.6 | 448 |
| Multitask Language Understanding | MMLU | Accuracy | 72.25 | 413 |
Showing 10 of 48 rows
