
FastBERT: a Self-distilling BERT with Adaptive Inference Time

About

Pre-trained language models like BERT have proven to be highly performant. However, they are often computationally expensive in many practical scenarios, since such heavy models can hardly be deployed with limited resources. To improve their efficiency while preserving model performance, we propose FastBERT, a novel speed-tunable model with adaptive inference time. Inference speed can be flexibly adjusted to varying demands, while redundant computation on easy samples is avoided. Moreover, the model adopts a unique self-distillation mechanism at fine-tuning time, further improving computational efficiency with minimal loss in performance. Our model achieves promising results on twelve English and Chinese datasets, running 1 to 12 times faster than BERT depending on the speedup threshold chosen for the speed-performance tradeoff.
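To make the two mechanisms above concrete, here is a minimal PyTorch sketch of a FastBERT-style encoder, assuming a generic transformer stack: each layer gets a lightweight student classifier, fine-tuning distills the final (teacher) classifier into every student via KL divergence, and inference exits at the first layer whose student prediction has normalized entropy below a tunable speed threshold. The names (`EarlyExitEncoder`, `normalized_entropy`, `self_distillation_loss`), the layer dimensions, and the batch-level exit check are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of FastBERT-style self-distillation and adaptive early
# exit, assuming a generic PyTorch transformer stack. Names, sizes, and the
# batch-level exit check are illustrative, not the paper's implementation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_entropy(probs: torch.Tensor) -> torch.Tensor:
    # Prediction uncertainty scaled to [0, 1] by log(K), so a single
    # threshold works regardless of the number of classes K.
    eps = 1e-12
    ent = -(probs * (probs + eps).log()).sum(dim=-1)
    return ent / math.log(probs.size(-1))


class EarlyExitEncoder(nn.Module):
    def __init__(self, hidden=768, heads=12, num_layers=12, num_classes=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
            for _ in range(num_layers)
        )
        # One lightweight "student" classifier per layer, plus the final
        # "teacher" classifier on top of the last layer.
        self.students = nn.ModuleList(
            nn.Linear(hidden, num_classes) for _ in range(num_layers)
        )
        self.teacher = nn.Linear(hidden, num_classes)

    def forward(self, x, speed=0.5):
        # `speed` is the uncertainty threshold: larger values exit earlier
        # (faster but less accurate); 0 disables early exit entirely.
        for layer, student in zip(self.layers, self.students):
            x = layer(x)
            probs = F.softmax(student(x[:, 0]), dim=-1)  # [CLS]-style pooling
            if normalized_entropy(probs).max() < speed:
                return probs  # confident enough: skip the remaining layers
        return F.softmax(self.teacher(x[:, 0]), dim=-1)


def self_distillation_loss(model: EarlyExitEncoder, x: torch.Tensor):
    # Self-distillation at fine-tuning: each student mimics the teacher's
    # soft output via KL divergence. The teacher pass is detached so only
    # the student heads receive distillation gradients.
    with torch.no_grad():
        h, hiddens = x, []
        for layer in model.layers:
            h = layer(h)
            hiddens.append(h)
        teacher_probs = F.softmax(model.teacher(h[:, 0]), dim=-1)
    loss = x.new_zeros(())
    for student, h in zip(model.students, hiddens):
        log_q = F.log_softmax(student(h[:, 0]), dim=-1)
        loss = loss + F.kl_div(log_q, teacher_probs, reduction="batchmean")
    return loss
```

In this sketch, sweeping `speed` between 0 and 1 is what produces the speed-performance tradeoff described above: a high threshold lets most samples exit within the first few layers, while `speed=0` reduces the model to a plain full-depth BERT pass.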

Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Haotang Deng, Qi Ju • 2020

Related benchmarks

Task                         Dataset                         Result             Rank
Sentiment Analysis           IMDB (test)                     Accuracy: -2.5     248
Sentiment Classification     SST-IMDb                        Accuracy: -0.023   12
Natural Language Inference   SciTail (source: MRPC, test)    Accuracy: -0.6     12
Sentiment Classification     SST-Yelp                        Accuracy: -2.5     12
Natural Language Inference   SNLI (source: MNLI, test)       Accuracy: -1.3     12
Sentiment Analysis           Yelp (source: SST, test)        Accuracy: -2.8     12
Paraphrase Detection         QQP (source: RTE, test)         Accuracy: -0.5     12
