
Diversifying the Expert Knowledge for Task-Agnostic Pruning in Sparse Mixture-of-Experts

About

Mixture-of-Experts (MoE) architectures increase the number of model parameters while activating only a sparse subset of them for each task, significantly improving the performance of Large Language Models (LLMs) without increasing inference cost. However, the memory consumed by the growing number of experts makes these models hard to deploy in many real-world settings. Our empirical study reveals that some experts encode redundant knowledge during pre-training. We therefore propose grouping similar experts and pruning them to improve the model's parameter efficiency. We validate the effectiveness of our method by pruning three state-of-the-art MoE architectures: Mixtral, Deepseek-MoE, and Qwen. The evaluation shows that our method outperforms other model pruning methods on a range of natural language tasks. We will release our code to facilitate future research.
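The abstract does not spell out the grouping-and-pruning procedure, but the core idea can be sketched as below: measure pairwise similarity between expert weights and keep one representative per group of near-duplicates. The similarity threshold, `keep_ratio` parameter, and function name are illustrative assumptions, not the paper's exact algorithm.

```python
import torch


def prune_similar_experts(expert_weights, keep_ratio=0.5, sim_threshold=0.9):
    """Greedy sketch of redundancy-based expert pruning: keep an expert only
    if it is not too similar to one already kept (illustrative, not the
    paper's exact method)."""
    # Flatten each expert's weight matrix into a unit-norm vector.
    vecs = torch.stack([w.flatten() for w in expert_weights])
    vecs = torch.nn.functional.normalize(vecs, dim=1)
    sim = vecs @ vecs.T  # pairwise cosine similarity between experts

    n_keep = max(1, int(len(expert_weights) * keep_ratio))
    kept = []
    for i in range(len(expert_weights)):
        # Skip expert i if it is a near-duplicate of an already-kept expert.
        if all(sim[i, j] < sim_threshold for j in kept):
            kept.append(i)
        if len(kept) == n_keep:
            break
    return kept
```

For example, given two identical experts and one orthogonal to them, the sketch keeps the first copy and the distinct expert while pruning the duplicate.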

Zeliang Zhang, Xiaodong Liu, Hao Cheng, Chenliang Xu, Jianfeng Gao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | OpenBookQA | Accuracy | 35.8 | 465 |
| Natural Language Inference | RTE | Accuracy | 71.1 | 367 |
| Question Answering | BoolQ | -- | -- | 240 |
| Reading Comprehension | BoolQ | Accuracy | 88 | 219 |
| Language Understanding | MMLU | Humanities Avg | 63.7 | 33 |
| General Language Evaluation | Aggregated MMLU, BoolQ, OpenBookQA, RTE | Average Accuracy | 67.6 | 22 |
