
MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router

About

Mixture-of-Experts (MoE) architectures face challenges such as high memory consumption and redundancy among experts. Pruning MoE models can reduce network weights while maintaining model performance. Motivated by the recent observation of emergent large-magnitude features in Large Language Models (LLMs) and the MoE routing policy, we propose MoE-Pruner, a method that prunes the weights with the smallest magnitudes multiplied by the corresponding input activations and router weights, on each output neuron. Our pruning method is one-shot, requiring no retraining or weight updates. We evaluate our method on Mixtral-8x7B and Mixtral-8x22B across multiple language benchmarks. Experimental results show that our pruning method significantly outperforms state-of-the-art LLM pruning methods. Furthermore, our pruned MoE models can benefit from a pretrained teacher model through expert-wise knowledge distillation, improving performance post-pruning. Experimental results demonstrate that the Mixtral-8x7B model with 50% sparsity maintains 99% of the performance of the original model after expert-wise knowledge distillation.
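The pruning criterion described above (weight magnitude times input activation times router weight, compared per output neuron) can be sketched as follows. This is a minimal illustration assuming the router weight scales each token's activations before a per-feature norm is taken; the paper's exact formulation may differ, and the function name and shapes are hypothetical.

```python
import numpy as np

def moe_pruner_mask(W, X, router_weight, sparsity=0.5):
    """One-shot pruning mask for a single expert's weight matrix.

    W             : (d_out, d_in) expert weight matrix.
    X             : (n_tokens, d_in) input activations routed to this expert.
    router_weight : (n_tokens,) router gate values for those tokens.

    Importance score (a sketch of the paper's metric): |W_ij| times the
    router-weighted activation norm of input feature j. The smallest-scoring
    weights are pruned within each row, i.e. per output neuron.
    """
    # Router-weighted per-feature activation norm (assumption: router
    # weights scale activations before the norm is taken).
    act_norm = np.linalg.norm(X * router_weight[:, None], axis=0)  # (d_in,)
    score = np.abs(W) * act_norm[None, :]                          # (d_out, d_in)

    k = int(W.shape[1] * sparsity)  # weights to prune per output neuron
    mask = np.ones_like(W, dtype=bool)
    if k > 0:
        # Indices of the k smallest scores in each row are zeroed out.
        idx = np.argsort(score, axis=1)[:, :k]
        np.put_along_axis(mask, idx, False, axis=1)
    return mask
```

Applying `W * mask` yields the pruned weights in one shot, with no retraining or weight updates, consistent with the method's description.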

Yanyue Xie, Zhi Zhang, Ding Zhou, Cong Xie, Ziang Song, Xin Liu, Yanzhi Wang, Xue Lin, An Xu• 2024

Related benchmarks

Task | Dataset | Result | Rank
Commonsense Reasoning | WinoGrande | Accuracy 67.64 | 1085
Multiple-choice Question Answering | ARC Easy | Accuracy 71.89 | 188
Commonsense Inference | HellaSwag | Accuracy 50.94 | 91
Zero-shot Language Understanding | BoolQ, ARC-e, ARC-c, WinoGrande, HellaSwag | ARC-e Accuracy 81.9 | 8
Natural Language Reasoning | BoolQ, ARC-e, ARC-c, WinoGrande, HellaSwag | BoolQ Accuracy 69.14 | 4
Multiple-choice Question Answering | ARC-C | Accuracy 40.02 | 4
