
Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models

About

A pivotal advancement in the progress of large language models (LLMs) is the emergence of Mixture-of-Experts (MoE) LLMs. Compared to traditional LLMs, MoE LLMs can achieve higher performance with fewer activated parameters, but they remain hard to deploy due to their immense overall parameter sizes. Unlike previous weight-pruning methods that rely on specifically designed hardware, this paper aims to enhance the deployment efficiency of MoE LLMs by introducing plug-and-play expert-level sparsification techniques. Specifically, we propose, for the first time to the best of our knowledge, post-training approaches for task-agnostic and task-specific expert pruning and skipping of MoE LLMs, tailored to improve deployment efficiency while maintaining model performance across a wide range of tasks. Extensive experiments show that our proposed methods can simultaneously reduce model size and increase inference speed while maintaining satisfactory performance. Data and code will be available at https://github.com/Lucky-Lance/Expert_Sparsity.
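To make the two ideas concrete, below is a minimal, illustrative sketch of expert-level sparsification for a top-2 MoE layer. It is not the paper's exact algorithm: the class and method names (MoELayer, prune_experts, the skip_threshold argument) and the routing-frequency pruning criterion are assumptions introduced only for illustration; the paper's actual criteria should be taken from its released code.

```python
# Hedged sketch (assumptions, not the paper's method): post-training expert
# pruning keeps only the experts the router uses most on a calibration set,
# and dynamic expert skipping drops the second expert per token when its
# routing weight is small.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, hidden_dim: int, ffn_dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(hidden_dim, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_dim, ffn_dim), nn.SiLU(), nn.Linear(ffn_dim, hidden_dim))
            for _ in range(num_experts)
        )

    @torch.no_grad()
    def prune_experts(self, calib_tokens: torch.Tensor, keep: int):
        """Expert pruning (illustrative criterion): keep the `keep` experts that the
        router selects most often on calibration tokens of shape (N, hidden_dim).
        Assumes keep >= top_k."""
        probs = F.softmax(self.gate(calib_tokens), dim=-1)            # (N, E)
        top = probs.topk(self.top_k, dim=-1).indices                   # (N, k)
        counts = torch.zeros(probs.size(-1))
        counts.scatter_add_(0, top.flatten(), torch.ones(top.numel()))
        keep_idx = counts.topk(keep).indices.sort().values
        self.experts = nn.ModuleList(self.experts[i] for i in keep_idx.tolist())
        # Shrink the router so it only scores the remaining experts.
        self.gate.weight.data = self.gate.weight.data[keep_idx]

    def forward(self, x: torch.Tensor, skip_threshold: float = 0.0) -> torch.Tensor:
        """Top-2 routing with dynamic expert skipping: the non-top-1 expert is
        skipped for a token when its normalized weight is below skip_threshold."""
        probs = F.softmax(self.gate(x), dim=-1)                        # x: (N, hidden_dim)
        weights, idx = probs.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for t in range(x.size(0)):                                     # token-by-token for clarity
            for j in range(self.top_k):
                if j > 0 and weights[t, j] < skip_threshold:
                    continue                                           # dynamic expert skipping
                out[t] += weights[t, j] * self.experts[idx[t, j]](x[t])
        return out
```

As a usage sketch, one could collect router inputs from a small calibration corpus, call prune_experts(calib_tokens, keep=6) on each MoE layer of an 8-expert (Mixtral-style) model, and then run inference with a nonzero skip_threshold, trading a small amount of accuracy for lower memory use and faster decoding.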

Xudong Lu, Qi Liu, Yuhui Xu, Aojun Zhou, Siyuan Huang, Bo Zhang, Junchi Yan, Hongsheng Li · 2024

Related benchmarks

Task                           | Dataset       | Metric           | Result | Rank
-------------------------------|---------------|------------------|--------|-----
Commonsense Reasoning          | HellaSwag     | Accuracy         | 57.66  | 1891
Language Modeling              | WikiText-2    | Perplexity (PPL) | 6.81   | 1624
Commonsense Reasoning          | WinoGrande    | Accuracy         | 73     | 1085
Question Answering             | ARC Challenge | Accuracy         | 46     | 906
Language Understanding         | MMLU          | Accuracy         | 47.3   | 825
Question Answering             | ARC Easy      | Accuracy         | 73     | 597
Physical Commonsense Reasoning | PIQA          | Accuracy         | 77     | 572
Question Answering             | OpenBookQA    | Accuracy         | 35     | 465
Natural Language Inference     | RTE           | Accuracy         | 93.5   | 448
Video Understanding            | MVBench       | Accuracy         | 66.73  | 425

(Showing 10 of 42 rows.)
