
Routing by Analogy: kNN-Augmented Expert Assignment for Mixture-of-Experts

About

Mixture-of-Experts (MoE) architectures scale large language models efficiently by employing a parametric "router" to dispatch tokens to a sparse subset of experts. Typically, this router is trained once and then frozen, rendering routing decisions brittle under distribution shifts. We address this limitation by introducing kNN-MoE, a retrieval-augmented routing framework that reuses optimal expert assignments from a memory of similar past cases. This memory is constructed offline by directly optimizing token-wise routing logits to maximize the likelihood on a reference set. Crucially, we use the aggregate similarity of retrieved neighbors as a confidence-driven mixing coefficient, thus allowing the method to fall back to the frozen router when no relevant cases are found. Experiments show kNN-MoE outperforms zero-shot baselines and rivals computationally expensive supervised fine-tuning.
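The routing idea in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical implementation, not the authors' code: it retrieves the k most similar stored token states, aggregates their precomputed routing logits, and uses the mean neighbor similarity as a confidence coefficient `alpha` that falls back to the frozen router's logits when no relevant cases are found. All names (`knn_moe_logits`, `memory_keys`, `memory_logits`, the softmax temperature `tau`) are assumptions for illustration.

```python
import numpy as np

def knn_moe_logits(h, frozen_logits, memory_keys, memory_logits, k=4, tau=1.0):
    """Hedged sketch of kNN-augmented expert routing.

    h             : (d,) hidden state of the current token
    frozen_logits : (E,) logits from the frozen parametric router
    memory_keys   : (N, d) stored token representations (built offline)
    memory_logits : (N, E) routing logits optimized per stored token
    """
    # Cosine similarity between the token and every stored key.
    sims = memory_keys @ h / (
        np.linalg.norm(memory_keys, axis=1) * np.linalg.norm(h) + 1e-9
    )
    # Indices of the k nearest neighbors.
    idx = np.argsort(sims)[-k:]
    top_sims = sims[idx]

    # Softmax-weighted aggregation of the neighbors' routing logits.
    w = np.exp(top_sims / tau)
    w /= w.sum()
    retrieved = (w[:, None] * memory_logits[idx]).sum(axis=0)

    # Aggregate similarity as confidence: low similarity (no relevant
    # cases) drives alpha toward 0, falling back to the frozen router.
    alpha = np.clip(top_sims.mean(), 0.0, 1.0)
    return alpha * retrieved + (1.0 - alpha) * frozen_logits
```

Under this sketch, a token whose neighbors all have near-zero similarity is routed exactly as the frozen router would route it, while a token with close matches in memory inherits the retrieved assignments.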

Boxuan Lyu, Soichiro Murakami, Hidetaka Kamigaito, Peinan Zhang • 2026

Related benchmarks

Task                          Dataset              Result            Rank
Language Understanding        MMLU                 Accuracy 47.81    756
Question Answering            GPQA                 Accuracy 29.8     258
Medical Question Answering    MedMCQA (test)       Accuracy 66.65    134
Question Answering            MedQA-USMLE (test)   Accuracy 76.7     101
Question Answering            GPQA (test)          Accuracy 45.45    55
Question Answering            MMLU (test)          Accuracy 78.86    15
Question Answering            SuperGPQA (test)     Accuracy 35.15    15
Language Understanding        USMLE                Accuracy 35.04    3
