
MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers

About

We generalize deep self-attention distillation in MiniLM (Wang et al., 2020) by using only self-attention relation distillation for task-agnostic compression of pretrained Transformers. In particular, we define multi-head self-attention relations as the scaled dot-products between pairs of query, key, and value vectors within each self-attention module, and employ this relational knowledge to train the student model. Beyond its simplicity and unified principle, the approach places no restriction on the number of the student's attention heads, whereas most previous work must guarantee the same head count in teacher and student. Moreover, the fine-grained self-attention relations tend to fully exploit the interaction knowledge learned by the Transformer. In addition, we thoroughly examine the layer-selection strategy for teacher models, rather than relying only on the last layer as in MiniLM. We conduct extensive experiments on compressing both monolingual and multilingual pretrained models. Experimental results demonstrate that our models distilled from base-size and large-size teachers (BERT, RoBERTa, and XLM-R) outperform the state of the art.
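The core idea above can be sketched in a few lines of PyTorch. This is an illustrative reconstruction, not the authors' released code: the function names, the relation-head split, and the use of KL divergence as the matching loss are assumptions based on the description (pairwise scaled dot-products among query, key, or value vectors, re-split into relation heads so teacher and student need not share an attention-head count).

```python
import torch
import torch.nn.functional as F

def self_attention_relations(vectors: torch.Tensor, num_relation_heads: int) -> torch.Tensor:
    """Pairwise relations for one relation type (query-query, key-key, or value-value).

    vectors: [batch, seq_len, hidden] concatenated per-layer query/key/value vectors.
    The hidden dimension is re-split into `num_relation_heads` relation heads,
    so the teacher and student do not need the same number of attention heads
    or even the same hidden size: the relation matrices are [seq_len x seq_len].
    """
    batch, seq_len, hidden = vectors.shape
    d_r = hidden // num_relation_heads
    # Reshape to [batch, relation_heads, seq_len, d_r].
    v = vectors.view(batch, seq_len, num_relation_heads, d_r).transpose(1, 2)
    # Scaled dot-product between all pairs of positions, then log-softmax rows.
    rel = torch.matmul(v, v.transpose(-1, -2)) / (d_r ** 0.5)
    return F.log_softmax(rel, dim=-1)

def relation_distillation_loss(teacher_vecs: torch.Tensor,
                               student_vecs: torch.Tensor,
                               num_relation_heads: int) -> torch.Tensor:
    """KL divergence between teacher and student self-attention relations."""
    t = self_attention_relations(teacher_vecs, num_relation_heads)
    s = self_attention_relations(student_vecs, num_relation_heads)
    # KL(teacher || student); both inputs are log-probabilities.
    return F.kl_div(s, t, log_target=True, reduction="batchmean")
```

Note that because the loss compares [seq_len x seq_len] relation matrices, a teacher with hidden size 768 and a student with hidden size 384 can be matched directly once both are split into the same number of relation heads.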

Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, Furu Wei • 2020

Related benchmarks

Task                           | Dataset                                                | Metric             | Result | Rank
Natural Language Understanding | GLUE (dev)                                             | SST-2 (Acc)        | 92.4   | 504
Question Answering             | SQuAD v2.0 (dev)                                       | F1                 | 82.3   | 158
Natural Language Understanding | SuperGLUE (dev)                                        | Average Score      | 66.1   | 91
Question Answering             | XQuAD                                                  | F1 (ar)            | 66.4   | 12
Natural Language Inference     | MNLI-m (dev)                                           | Accuracy           | 87     | 12
Multilingual NLP               | HPLT Evaluation Set (da, de, en, fr, ga, hu, ur)       | Performance (da)   | 86.1   | 8
Multilingual NLP               | Multilingual Benchmark Average across languages (test) | Average Score      | 83.46  | 8
Question Answering             | Question Answering (test)                              | Relative CPU Speed | 4.25   | 3
