$\mathcal{X}$-KD: General Experiential Knowledge Distillation for Large Language Models
About
Knowledge Distillation (KD) for Large Language Models (LLMs) has become increasingly important as models grow in size and complexity. While existing distillation approaches focus on imitating teacher behavior, they often overlook the original learning environment that shaped the teacher's knowledge. Inspired by experiential learning theory and inverse reinforcement learning, we propose Experiential Knowledge Distillation ($\mathcal{X}$-KD), a general framework that enables student models to learn in the teacher's original learning environment. $\mathcal{X}$-KD adopts the Approximate Variational Reward Imitation Learning (AVRIL) framework to jointly model the teacher's original reward function and perform policy distillation, encouraging consistency between the student policy and the inferred reward function. Our derivation shows that $\mathcal{X}$-KD fits within the supervised learning framework and applies to both sequence-level and divergence-based distillation methods, underscoring the simplicity and flexibility of the approach. Empirically, $\mathcal{X}$-KD outperforms the generalized KD and MiniLLM baselines on abstractive summarization, machine translation, and arithmetic reasoning tasks, and achieves a better performance-diversity trade-off and higher data efficiency than baseline KD approaches.
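The abstract describes combining a divergence-based distillation loss with an AVRIL-style term that keeps the student policy consistent with a variationally inferred reward. As a rough illustration only (the paper's exact objective is not shown here), the sketch below combines a token-level KL distillation term with a reward-consistency term under a Boltzmann-rationality assumption and a Gaussian posterior over rewards; all function and argument names (`xkd_loss`, `lam`, the 0.01 reward-KL weight) are hypothetical.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def xkd_loss(teacher_logits, student_logits, reward_mean, reward_logvar, lam=0.5):
    """Hypothetical X-KD-style objective (illustrative, not the paper's exact loss).

    teacher_logits, student_logits: (T, V) per-token logits over the vocabulary
    reward_mean, reward_logvar:     (T, V) Gaussian posterior q(r) over per-action rewards
    lam: weight on the reward-consistency term
    """
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    # divergence-based distillation: forward KL(teacher || student), averaged over tokens
    kd = np.sum(p_t * (np.log(p_t + 1e-9) - np.log(p_s + 1e-9)), axis=-1).mean()
    # reward consistency: the student should match the Boltzmann policy implied
    # by the posterior-mean reward (Boltzmann rationality, as in AVRIL)
    pi_r = softmax(reward_mean)
    consistency = np.sum(p_s * (np.log(p_s + 1e-9) - np.log(pi_r + 1e-9)), axis=-1).mean()
    # KL(q(r) || N(0, I)) regularizes the variational reward posterior
    kl_reward = 0.5 * np.mean(np.exp(reward_logvar) + reward_mean**2 - 1.0 - reward_logvar)
    return kd + lam * consistency + 0.01 * kl_reward
```

When the student matches both the teacher and the reward-implied policy, the first two terms vanish and only the small posterior regularizer remains, so the loss shrinks as distillation succeeds.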
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Arithmetic Reasoning | GSM8K (test) | Accuracy | 13.63 | 129 |
| Machine Translation | WMT En-De 14 (val) | BLEU | 27.61 | 20 |
| Abstractive Summarization | XSum (val) | ROUGE-2 | 0.1624 | 16 |
| Abstractive Summarization | XSum | Win Rate | 48.3 | 1 |
| Machine Translation | WMT | Win Rate | 43.8 | 1 |
| Mathematical Reasoning | GSM8K | Win Rate | 48.9 | 1 |