Multi-Aspect Knowledge Distillation for Language Model with Low-rank Factorization
About
Knowledge distillation is an effective technique for compressing pre-trained language models. However, existing methods focus only on the knowledge distribution among layers, which can lose fine-grained information during the alignment process. To address this issue, we introduce Multi-aspect Knowledge Distillation (MaKD), which mimics the self-attention and feed-forward modules in greater depth to capture rich linguistic knowledge from different aspects. Experimental results demonstrate that MaKD achieves competitive performance against various strong baselines under the same parameter budget. In addition, our method also performs well when distilling auto-regressive models.
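The two ingredients named above (low-rank factorization of weight matrices, and a distillation objective that aligns both the self-attention and feed-forward aspects of the teacher) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the MSE alignment terms, and the weighting coefficient `alpha` are all assumptions for exposition.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Compress a weight matrix W (d_out x d_in) into two low-rank
    factors A (d_out x r) and B (r x d_in) via truncated SVD, so that
    W is approximated by A @ B with far fewer parameters when r is small.
    Illustrative stand-in for the paper's low-rank factorization step."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # fold singular values into the left factor
    B = Vt[:rank, :]
    return A, B

def multi_aspect_loss(t_attn, s_attn, t_ffn, s_ffn, alpha=0.5):
    """Hypothetical multi-aspect distillation objective: one MSE term
    aligning teacher/student attention maps, one aligning their
    feed-forward hidden states. The weighting is illustrative."""
    attn_term = np.mean((t_attn - s_attn) ** 2)
    ffn_term = np.mean((t_ffn - s_ffn) ** 2)
    return alpha * attn_term + (1.0 - alpha) * ffn_term
```

If the original weight matrix is (close to) rank `r`, `A @ B` reconstructs it almost exactly while storing `r * (d_out + d_in)` values instead of `d_out * d_in`; the loss goes to zero only when the student matches the teacher in both aspects.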
Zihe Liu, Yulong Mao, Jinan Xu, Xinrui Peng, Kaiyu Huang • 2026
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Natural Language Understanding | GLUE (dev) | -- | 518 |
| Question Answering | SQuAD v1.1 (dev) | F1: 82.3 | 380 |
| Question Answering | SQuAD v2.0 (dev) | F1: 72.9 | 163 |
| Instruction Following | Vicuna | ROUGE-L: 15.62 | 83 |
| Instruction Following | SelfInst | ROUGE-L: 9.38 | 73 |
| Instruction Following | Dolly | ROUGE-L: 20.74 | 32 |
| Question Answering | SQuAD v1.1, v2.0 (dev) | SQuAD v1.1 EM/F1: 74.9 | 5 |