
Exploring Knowledge Purification in Multi-Teacher Knowledge Distillation for LLMs

About

Knowledge distillation has emerged as a pivotal technique for transferring knowledge from stronger large language models (LLMs) to smaller, more efficient models. However, traditional distillation approaches face challenges related to knowledge conflicts and high resource demands, particularly when leveraging multiple teacher models. In this paper, we introduce the concept of Knowledge Purification, which consolidates the rationales from multiple teacher LLMs into a single rationale, thereby mitigating conflicts and enhancing efficiency. To investigate the effectiveness of knowledge purification, we further propose five purification methods from various perspectives. Our experiments demonstrate that these methods not only improve the performance of the distilled model but also effectively alleviate knowledge conflicts. Moreover, router-based methods exhibit robust generalization capabilities, underscoring the potential of innovative purification techniques in optimizing multi-teacher distillation and facilitating the practical deployment of powerful yet lightweight models.
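The abstract does not detail the five purification methods, so the Python sketch below is only a rough illustration of the general idea of rationale-level purification, not the authors' algorithm: teacher rationales that conflict at the answer level are filtered out by majority vote, and a single surviving rationale is kept for distillation. All class names, the `purify` function, and the scoring heuristic are assumptions made for this example.

```python
# Hypothetical sketch of multi-teacher "knowledge purification":
# rationales from several teacher LLMs are consolidated into one
# rationale before distilling into a student model. The heuristic
# below (majority-vote filtering + a quality score) is an
# illustrative assumption, not the paper's actual method.

from collections import Counter
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TeacherOutput:
    rationale: str   # chain-of-thought produced by one teacher LLM
    answer: str      # that teacher's final answer

def purify(outputs: List[TeacherOutput],
           score: Callable[[str], float]) -> str:
    """Consolidate multiple teacher rationales into a single one.

    Keep only rationales whose final answer matches the majority
    vote (resolving answer-level conflicts), then return the
    highest-scoring survivor.
    """
    majority_answer, _ = Counter(o.answer for o in outputs).most_common(1)[0]
    consistent = [o for o in outputs if o.answer == majority_answer]
    return max(consistent, key=lambda o: score(o.rationale)).rationale

# Example: three teachers, two agree on the answer; the conflicting
# rationale is filtered out before the best survivor is selected.
teachers = [
    TeacherOutput("Birds lay eggs, so a penguin lays eggs.", "yes"),
    TeacherOutput("Penguins are birds and all birds lay eggs.", "yes"),
    TeacherOutput("Penguins are mammals, so they do not.", "no"),
]
purified = purify(teachers, score=len)  # length as a stand-in quality score
print(purified)
```

The single purified rationale would then serve as the distillation target for the student model, which is cheaper than training against every teacher's (possibly contradictory) rationale.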

Ruihan Jin, Pengpeng Shao, Zhengqi Wen, Jinyang Wu, Mingkuan Feng, Shuo Yang, Chu Yuan Zhang, Jianhua Tao • 2026

Related benchmarks

Task | Dataset | Result | Rank
Question Answering | PubMedQA | Accuracy: 63.5 | 145
Multiple-choice Question Answering | ARC Challenge | Accuracy: 61.12 | 106
Multiple-choice Question Answering | OBQA | Accuracy: 76.6 | 61
Multiple-choice Question Answering | RiddleSense | Accuracy: 70.59 | 31
Multiple-choice Question Answering | Average (OBQA, ARC, Riddle, PQA) | Average Accuracy: 67.55 | 31
Biomedical Reasoning | BioASQ (out-of-domain) | Accuracy: 91.87 | 25
Commonsense Reasoning | PIQA (out-of-domain) | Accuracy: 69.53 | 25
Reasoning and Multitask Language Understanding | OBQA, ARC, Riddle, PQA, and MMLU | OBQA Accuracy: 77 | 4
