
Distilling BERT into Simple Neural Networks with Unlabeled Transfer Data

About

Recent advances in pre-training huge models on large amounts of text through self-supervision have obtained state-of-the-art results in various natural language processing tasks. However, these huge and expensive models are difficult to use in practice for downstream tasks. Some recent efforts use knowledge distillation to compress these models. However, we see a gap between the performance of the smaller student models as compared to that of the large teacher. In this work, we leverage large amounts of in-domain unlabeled transfer data in addition to a limited amount of labeled training instances to bridge this gap for distilling BERT. We show that simple RNN-based student models, even with hard distillation, can perform on par with the huge teachers given the transfer set. The student performance can be further improved with soft distillation and by leveraging teacher intermediate representations. We show that our student models can compress the huge teacher by up to 26x while still matching or even marginally exceeding the teacher performance in low-resource settings with a small amount of labeled data. Additionally, for the multilingual extension of this work with XtremeDistil (Mukherjee and Hassan Awadallah, 2020), we demonstrate massive distillation of multilingual BERT-like teacher models by up to 35x in terms of parameter compression and 51x in terms of latency speedup for batch inference while retaining 95% of the teacher's F1-score for NER over 41 languages.
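The hard vs. soft distillation contrast in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: hard distillation trains the student on the teacher's argmax pseudo-labels over unlabeled transfer data, while soft distillation matches the teacher's temperature-softened output distribution. The function names and the temperature/alpha values below are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields a
    # softer (more uniform) distribution over classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def hard_label(teacher_logits):
    # Hard distillation: the teacher's argmax prediction serves as a
    # pseudo-label for an unlabeled transfer-set example.
    return max(range(len(teacher_logits)), key=lambda i: teacher_logits[i])

def distillation_loss(student_logits, teacher_logits, gold_label,
                      temperature=2.0, alpha=0.5):
    # Soft distillation: cross-entropy between the teacher's and the
    # student's temperature-softened output distributions.
    student_soft = softmax(student_logits, temperature)
    teacher_soft = softmax(teacher_logits, temperature)
    soft_loss = -sum(t * math.log(s)
                     for t, s in zip(teacher_soft, student_soft))
    # Standard cross-entropy against the gold label (used when a
    # labeled training instance is available).
    student_probs = softmax(student_logits)
    hard_loss = -math.log(student_probs[gold_label])
    # Weighted combination of the two objectives.
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

On unlabeled transfer data, where `gold_label` is unavailable, hard distillation simply substitutes `hard_label(teacher_logits)` for it; soft distillation additionally exploits the full teacher distribution, which is one reason it can further improve the student.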

Subhabrata Mukherjee, Ahmed Hassan Awadallah• 2019

Related benchmarks

Task                  Dataset       Accuracy  Rank
Image Classification  DTD           65.42     419
Image Classification  SVHN          94.91     359
Image Classification  FGVCAircraft  49.22     225
Image Classification  STL-10        97.01     33
Image Classification  USPS          96.86     12
Image Classification  MNIST         98.57     9
Image Classification  ISIC 2020     76.92     9
