
DIVE into MoE: Diversity-Enhanced Reconstruction of Large Language Models from Dense into Mixture-of-Experts

About

Large language models (LLMs) with the Mixture-of-Experts (MoE) architecture achieve high cost-efficiency by selectively activating a subset of their parameters. Despite the inference efficiency of MoE LLMs, training extensive experts from scratch incurs substantial overhead, whereas reconstructing a dense LLM into an MoE LLM significantly reduces the training budget. However, existing reconstruction methods often overlook the diversity among experts, leading to potential redundancy. In this paper, we observe that a given LLM exhibits notable diversity after being pruned on different calibration datasets, and based on this observation we present a Diversity-Enhanced reconstruction method named DIVE. The recipe of DIVE includes domain affinity mining, pruning-based expert reconstruction, and efficient retraining. Specifically, the reconstruction consists of pruning and reassembly of the feed-forward network (FFN) module. After reconstruction, we efficiently retrain the routers, experts, and normalization modules. We implement DIVE on Llama-style LLMs with open-source training corpora. Experiments show that DIVE achieves training efficiency with minimal accuracy trade-offs, outperforming existing pruning and MoE reconstruction methods with the same number of activated parameters.
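To make the reconstruction recipe concrete: each expert can be viewed as a pruned copy of the dense FFN that keeps a different subset of intermediate channels (one subset per calibration domain), and a lightweight router dispatches tokens among the reassembled experts. The PyTorch sketch below is a minimal illustration under these assumptions; the names (`LlamaFFN`, `prune_ffn`, `MoELayer`) are hypothetical, and the `keep_indices` argument stands in for the paper's domain-affinity-based channel selection, so this should not be read as the authors' implementation.

```python
# Hypothetical sketch of pruning-based expert reconstruction, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LlamaFFN(nn.Module):
    """Dense SwiGLU feed-forward module, as used in Llama-style LLMs."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff, bias=False)
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x):
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

def prune_ffn(dense: LlamaFFN, keep_idx: torch.Tensor) -> LlamaFFN:
    """Build one expert by keeping a subset of intermediate channels.
    In DIVE, keep_idx would come from importance scores computed on one
    calibration domain; here it is just an index tensor."""
    d_model = dense.gate_proj.in_features
    expert = LlamaFFN(d_model, keep_idx.numel())
    expert.gate_proj.weight.data = dense.gate_proj.weight.data[keep_idx]
    expert.up_proj.weight.data = dense.up_proj.weight.data[keep_idx]
    expert.down_proj.weight.data = dense.down_proj.weight.data[:, keep_idx]
    return expert

class MoELayer(nn.Module):
    """Reassembled MoE FFN: experts pruned on different calibration domains,
    plus a small top-k router that is learned during retraining."""
    def __init__(self, dense: LlamaFFN, keep_indices: list, k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([prune_ffn(dense, idx) for idx in keep_indices])
        self.router = nn.Linear(dense.gate_proj.in_features, len(self.experts), bias=False)
        self.k = k

    def forward(self, x):  # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)   # (num_tokens, num_experts)
        top_p, top_i = probs.topk(self.k, dim=-1)   # route each token to k experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e          # tokens sent to expert e in this slot
                if mask.any():
                    out[mask] += top_p[mask, slot, None] * expert(x[mask])
        return out
```

For the efficient-retraining stage, the natural counterpart of this sketch is to freeze all remaining dense weights and mark only the router, expert, and normalization parameters as trainable (for example, by filtering parameter names before constructing the optimizer).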

Yuchen Feng, Bowen Shen, Naibin Gu, Jiaxuan Zhao, Peng Fu, Zheng Lin, Weiping Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Language Modeling | WikiText2 | Perplexity | 13.23 | 1875 |
| Multi-task Language Understanding | MMLU | Accuracy | 26.78 | 842 |
| Language Modeling | WikiText2 v1 (test) | -- | -- | 341 |
| Language Modeling | LAMBADA | Perplexity | 24.15 | 99 |
| Language Modeling | LAMBADA (test) | -- | -- | 71 |
| Downstream Task | 11 Downstream Tasks Aggregate | Average Accuracy | 42.52 | 32 |
| Downstream Task | HellaSwag | Accuracy | 37.09 | 13 |
| Downstream Task | LogiQA | Accuracy | 22.12 | 7 |
| Downstream Task | SciQ | Accuracy | 83 | 7 |
| Downstream Task | PIQA | Accuracy | 68.12 | 7 |

(Showing 10 of 16 rows.)

Other info

Code
