
Multi-Aspect Knowledge Distillation for Language Model with Low-rank Factorization

About

Knowledge distillation is an effective technique for compressing pre-trained language models. However, existing methods focus only on the knowledge distribution among layers, which may cause fine-grained information to be lost during alignment. To address this issue, we introduce the Multi-aspect Knowledge Distillation (MaKD) method, which mimics the self-attention and feed-forward modules in greater depth to capture rich linguistic knowledge from different aspects. Experimental results demonstrate that MaKD achieves performance competitive with various strong baselines under the same parameter storage budget. In addition, our method also performs well when distilling auto-regressive models.

Zihe Liu, Yulong Mao, Jinan Xu, Xinrui Peng, Kaiyu Huang • 2026
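
The page does not reproduce the method itself, so below is a minimal PyTorch sketch of the two ingredients named in the title and abstract: distillation signals taken from both the self-attention and feed-forward modules, and a low-rank factorization of teacher weights for building the student. The function names (low_rank_factorize, multi_aspect_kd_loss), the learnable projection layer, and the choice of KL divergence for attention and MSE for feed-forward outputs are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def low_rank_factorize(weight: torch.Tensor, rank: int):
    # Approximate a teacher weight matrix with a rank-r SVD factorization,
    # one common way to shrink teacher weights into a smaller student
    # (illustrative; the paper's exact factorization may differ).
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # shape (out_dim, rank)
    B = Vh[:rank, :]             # shape (rank, in_dim)
    return A, B                  # weight ~= A @ B

def multi_aspect_kd_loss(
    t_attn: torch.Tensor,    # teacher attention probs (batch, heads, seq, seq)
    s_attn: torch.Tensor,    # student attention probs, same shape after head mapping
    t_ffn: torch.Tensor,     # teacher feed-forward outputs (batch, seq, d_teacher)
    s_ffn: torch.Tensor,     # student feed-forward outputs (batch, seq, d_student)
    proj: torch.nn.Linear,   # learnable projection d_student -> d_teacher
    alpha: float = 1.0,      # weight of the attention-aspect term (assumption)
    beta: float = 1.0,       # weight of the feed-forward-aspect term (assumption)
):
    # Attention aspect: match the student's attention distributions to the teacher's.
    attn_loss = F.kl_div((s_attn + 1e-9).log(), t_attn, reduction="batchmean")
    # Feed-forward aspect: match hidden representations after projecting dimensions.
    ffn_loss = F.mse_loss(proj(s_ffn), t_ffn)
    return alpha * attn_loss + beta * ffn_loss

In a full training loop, these two aspect losses would typically be added to the usual soft-label (logit) distillation objective.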

Related benchmarks

Task                           | Dataset                | Result                | Rank
Natural Language Understanding | GLUE (dev)             | --                    | 518
Question Answering             | SQuAD v1.1 (dev)       | F1 82.3               | 380
Question Answering             | SQuAD v2.0 (dev)       | F1 72.9               | 163
Instruction Following          | Vicuna                 | Rouge-L 15.62         | 83
Instruction Following          | SelfInst               | Rouge-L 9.38          | 73
Instruction Following          | Dolly                  | Rouge-L 20.74         | 32
Question Answering             | SQuAD v1.1, v2.0 (dev) | SQuAD v1.1 EM/F1 74.9 | 5
