
Knowledge Localization in Mixture-of-Experts LLMs Using Cross-Lingual Inconsistency

About

Modern LLMs continue to exhibit significant variance in behavior across languages, such as being able to recall factual information in some languages but not others. While this cross-lingual inconsistency is typically studied as a problem to be mitigated, in this work we propose leveraging it as a tool for interpretability in mixture-of-experts (MoE) LLMs. Our knowledge localization framework contrasts routing between languages in which the model correctly recalls a piece of information and languages in which it fails, allowing us to isolate model components that play a functional role in answering questions about that knowledge. The method proceeds in two stages: (1) querying the model with difficult factual questions across a diverse set of languages to partition activations into "success" and "failure" buckets, and (2) applying a statistical contrastive analysis to the MoE router logits to identify the experts important for that knowledge. To validate that this small number of experts is necessary for answering a knowledge question, we deactivate them and re-ask the question. We find that despite deactivating only about 20 out of 6,000 experts, the model no longer answers correctly in over 40% of cases. Overall, this method provides a realistic and scalable knowledge localization approach for increasingly complex LLMs.
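The two-stage procedure can be sketched as follows. This is a minimal illustration, not the paper's implementation: the array shapes, function names, and the choice of a Welch t-statistic as the contrastive test are all assumptions; the abstract only specifies a "statistical contrastive analysis" over router logits followed by expert deactivation.

```python
import numpy as np

def contrast_experts(success_logits, failure_logits, top_k=20):
    """Rank experts by how much their router logits differ between buckets.

    success_logits, failure_logits: (n_queries, n_experts) arrays of router
    logits averaged over tokens, from languages where the model answered
    correctly vs. incorrectly. A Welch t-statistic per expert is used here
    as one plausible contrastive statistic (an assumption, not the paper's
    exact test). Returns the indices of the top_k most "success-associated"
    experts.
    """
    n_s, n_f = len(success_logits), len(failure_logits)
    mean_s, mean_f = success_logits.mean(axis=0), failure_logits.mean(axis=0)
    var_s = success_logits.var(axis=0, ddof=1)
    var_f = failure_logits.var(axis=0, ddof=1)
    t = (mean_s - mean_f) / np.sqrt(var_s / n_s + var_f / n_f + 1e-9)
    return np.argsort(-t)[:top_k]

def deactivate_experts(router_logits, expert_ids):
    """Mask selected experts so top-k routing can never choose them."""
    masked = router_logits.copy()
    masked[..., list(expert_ids)] = -np.inf
    return masked

# Synthetic demo: expert 7 fires more strongly on "success" queries,
# so the contrast should recover it.
rng = np.random.default_rng(0)
success = rng.normal(size=(100, 64))
success[:, 7] += 3.0
failure = rng.normal(size=(100, 64))

top = contrast_experts(success, failure, top_k=3)
masked = deactivate_experts(rng.normal(size=(4, 64)), top)
```

In a real MoE model the masking step would be applied inside each router before the top-k selection, after which the factual question is re-asked to check whether the answer degrades.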

Lucas Bandarkar, Alan Ansell, Trevor Cohn • 2026

Related benchmarks

Task | Dataset | Result | Rank
Knowledge Attribution Causal Ablation | ECLeKTic (test) | Ablation Success Rate 43.7 | 6
Knowledge Attribution Causal Ablation | MultiLoKo (test) | Ablation Success Rate 50.1 | 6
Knowledge Attribution Causal Ablation | G-MMLU (test) | Ablation Success Rate 47.5 | 6
Knowledge Attribution Causal Ablation | Total (Combined ECLeKTic, MultiLoKo, G-MMLU) (test) | Ablation Success Rate 48 | 6
