
Retraining-Free Merging of Sparse MoE via Hierarchical Clustering

About

Sparse Mixture-of-Experts (SMoE) models represent a significant advance in large language model (LLM) development through efficient parameter utilization, achieving substantial performance gains at reduced inference cost. However, deploying SMoE models in resource-limited environments is constrained by the extensive memory requirements of their expert components. To address this limitation, this paper introduces Hierarchical Clustering for Sparsely activated Mixture of Experts (HC-SMoE), a task-agnostic expert-merging framework that reduces parameters without retraining. HC-SMoE applies hierarchical clustering to expert outputs, which makes the merging robust to routing decisions and enables it to capture functional relationships between experts in large-scale architectures. We provide theoretical analysis and comprehensive evaluations across multiple zero-shot language tasks to demonstrate HC-SMoE's effectiveness on state-of-the-art models, including Qwen and Mixtral. The experimental results validate HC-SMoE's strong performance and practical applicability in real-world deployments.
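To make the approach concrete, below is a minimal sketch of output-based hierarchical clustering and merging. It is not the authors' implementation: the function name merge_experts_by_output, the calibration-output tensor expert_outputs, uniform weight averaging within each cluster, and the use of SciPy's average-linkage clustering are all illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def merge_experts_by_output(expert_outputs, expert_weights, num_clusters):
    """Group functionally similar experts and merge their weights (sketch).

    expert_outputs: array-like of shape (E, N, D) - each expert's outputs
                    on N calibration tokens with hidden size D (assumed setup)
    expert_weights: list of E weight dicts, e.g. {"w1": ..., "w2": ...}
    num_clusters:   target number of experts after merging
    """
    E = len(expert_outputs)
    # Represent each expert by its flattened output signature, so clustering
    # is driven by what experts do, not by router statistics.
    signatures = np.asarray(expert_outputs).reshape(E, -1)

    # Agglomerative (hierarchical) clustering with average linkage.
    Z = linkage(signatures, method="average", metric="euclidean")
    labels = fcluster(Z, t=num_clusters, criterion="maxclust")

    # Merge each cluster into a single expert; uniform averaging of the
    # member weights is a simplifying assumption here.
    merged = []
    for c in range(1, num_clusters + 1):
        members = [i for i in range(E) if labels[i] == c]
        merged.append({
            name: np.mean([expert_weights[i][name] for i in members], axis=0)
            for name in expert_weights[members[0]]
        })
    return merged, labels
```

Because the clustering keys on expert outputs rather than router assignments, functionally redundant experts land in the same cluster even when the router distributes tokens unevenly, which is what makes the merging robust to routing decisions.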

I-Chun Chen, Hsu-Shen Liu, Wei-Fang Sun, Chen-Hao Chao, Yen-Chang Hsu, Chun-Yi Lee • 2024

Related benchmarks

Task                              Dataset              Result           Rank
Commonsense Reasoning             HellaSwag            Accuracy 57.81   1460
Commonsense Reasoning             WinoGrande           Accuracy 72.06   776
Language Understanding            MMLU                 Accuracy 48.95   756
Question Answering                ARC                  Accuracy 46.42   154
Large Language Model Evaluation   OpenCompass          cMMLU 45.62      11
Reasoning                         OpenCompass (test)   CMMLU 45.11      11
