
$\mathcal{X}$-KD: General Experiential Knowledge Distillation for Large Language Models

About

Knowledge Distillation (KD) for Large Language Models (LLMs) has become increasingly important as models grow in size and complexity. While existing distillation approaches focus on imitating teacher behavior, they often overlook the original learning environment that shaped the teacher's knowledge. Inspired by experiential learning theory and inverse reinforcement learning, we propose Experiential Knowledge Distillation ($\mathcal{X}$-KD), a novel and general framework that enables student models to learn in the teacher's original learning environment. $\mathcal{X}$-KD adopts the Approximate Variational Reward Imitation Learning (AVRIL) framework to jointly model the teacher's original reward function and perform policy distillation, encouraging consistency between the student policy and the original reward function. Our derivation shows that $\mathcal{X}$-KD fits within the supervised learning framework and applies to both sequence-level and divergence-based distillation methods, underlining the simplicity and flexibility of the approach. Empirical results show that $\mathcal{X}$-KD outperforms the generalized KD and MiniLLM baselines on abstractive summarization, machine translation, and arithmetic reasoning tasks. Additionally, $\mathcal{X}$-KD achieves a better performance-diversity trade-off and higher data efficiency than baseline KD approaches.
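To make the idea concrete, the combined objective can be sketched as a standard divergence-based distillation term plus an AVRIL-style reward-consistency penalty. The sketch below is illustrative only: the function names, the squared-error consistency term, and the Gaussian reward posterior parameterization (`reward_mean`, `reward_var`) are hypothetical stand-ins, not the paper's exact formulation.

```python
import numpy as np

def forward_kl(p, q):
    """Forward KL divergence D_KL(p || q) between discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def xkd_loss(teacher_probs, student_probs, reward_mean, reward_var, beta=0.5):
    """Illustrative X-KD-style objective (hypothetical form).

    Combines (1) a policy-distillation term, here forward KL from the
    teacher's next-token distribution to the student's, and (2) an
    AVRIL-style consistency term that pushes the student's log-policy
    toward a variational Gaussian posterior over the teacher's original
    reward, weighted by the posterior precision.
    """
    distill = forward_kl(teacher_probs, student_probs)
    # Reward consistency: precision-weighted squared error between the
    # student log-probabilities and the posterior mean reward.
    consistency = float(
        np.mean((np.log(student_probs) - reward_mean) ** 2 / reward_var)
    )
    return distill + beta * consistency
```

With `beta = 0`, this reduces to plain divergence-based KD; the reward term is what couples the student to the (inferred) original learning environment.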

Yuang Cai, Yuyu Yuan • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Arithmetic Reasoning | GSM8K (test) | Accuracy | 13.63 | 129 |
| Machine Translation | WMT En-De 14 (val) | BLEU | 27.61 | 20 |
| Abstractive Summarization | XSum (val) | ROUGE-2 | 0.1624 | 16 |
| Abstractive Summarization | XSum | Win Rate | 48.3 | 1 |
| Machine Translation | WMT | Win Rate | 43.8 | 1 |
| Mathematical Reasoning | gsm | Win Rate | 48.9 | 1 |
