
Do Domain-specific Experts exist in MoE-based LLMs?

About

In the era of Large Language Models (LLMs), the Mixture of Experts (MoE) architecture has emerged as an effective approach for training extremely large models with improved computational efficiency. This success builds on extensive prior research aimed at enhancing expert specialization in MoE-based LLMs. However, the nature of such specialization, and how it can be systematically interpreted, remains an open research challenge. In this work, we investigate this gap by posing a fundamental question: Do domain-specific experts exist in MoE-based LLMs? To answer this question, we evaluate ten advanced MoE-based LLMs ranging from 3.8B to 120B parameters and provide empirical evidence that domain-specific experts exist. Building on this finding, we propose Domain Steering Mixture of Experts (DSMoE), a training-free framework that adds zero inference cost and outperforms both well-trained MoE-based LLMs and strong baselines, including Supervised Fine-Tuning (SFT). Experiments on four advanced open-source MoE-based LLMs, across both target and non-target domains, demonstrate that our method achieves strong performance and robust generalization without increasing inference cost or requiring retraining. Our implementation is publicly available at https://github.com/giangdip2410/Domain-specific-Experts.
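The abstract does not spell out the mechanism, but the two ingredients it names (empirical evidence for domain-specific experts, plus training-free steering with zero added inference cost) suggest one plausible reading: tally which experts the router selects on domain data, then bias the router logits toward those experts at inference time. The sketch below illustrates that reading in PyTorch; the function names, the top-fraction selection rule, and the steering strength `alpha` are all hypothetical illustrations, not the paper's actual procedure.

```python
import torch

def identify_domain_experts(activation_counts, top_fraction=0.1):
    """Rank experts by how often the router selects them on a
    domain-specific corpus and return the most-activated ones.

    activation_counts: 1-D tensor of per-expert selection counts,
    collected by running domain data through the frozen model.
    (Hypothetical criterion; the paper's exact rule may differ.)
    """
    num_experts = activation_counts.numel()
    k = max(1, int(top_fraction * num_experts))
    return torch.topk(activation_counts, k).indices

def steer_router_logits(router_logits, domain_expert_ids, alpha=1.0):
    """Add a constant bias to the logits of domain-specific experts
    before the usual top-k routing; the forward pass is otherwise
    unchanged, so no extra FLOPs are introduced.
    """
    steered = router_logits.clone()
    steered[..., domain_expert_ids] += alpha
    return steered

# Toy usage: 64 experts, top-2 routing after steering.
counts = torch.randint(0, 1000, (64,)).float()   # stand-in for real counts
experts = identify_domain_experts(counts)
logits = torch.randn(4, 64)                      # (tokens, experts)
topk = torch.topk(steer_router_logits(logits, experts, alpha=0.5), k=2)
```

Because the bias is a constant addition applied before the existing top-k selection, it changes which experts are chosen without adding parameters or computation, which is consistent with the abstract's "zero additional inference cost" and "training-free" claims.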

Giang Do, Hung Le, Truyen Tran • 2026

Related benchmarks

Task                              | Dataset      | Metric     | Score | Rank
Question Answering                | GPQA Diamond | Accuracy   | 89.5  | 97
Mathematics                       | AIME25       | Accuracy   | 78.6  | 63
Mathematics                       | AIME24       | Accuracy   | 77.3  | 38
Multi-task Language Understanding | MMLU-Pro     | Math Score | 87.5  | 16
