
DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference

About

Large-scale pre-trained language models such as BERT have brought significant improvements to NLP applications. However, they are also notorious for being slow in inference, which makes them difficult to deploy in real-time applications. We propose a simple but effective method, DeeBERT, to accelerate BERT inference. Our approach allows samples to exit earlier without passing through the entire model. Experiments show that DeeBERT is able to save up to ~40% inference time with minimal degradation in model quality. Further analyses show different behaviors in the BERT transformer layers and also reveal their redundancy. Our work provides new ideas to efficiently apply deep transformer-based models to downstream tasks. Code is available at https://github.com/castorini/DeeBERT.
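The abstract describes letting a sample exit at an intermediate transformer layer once an attached classifier is confident enough. Below is a minimal, framework-free sketch of that idea: each layer is assumed to already have per-layer classifier logits, and the entropy of the layer's output distribution serves as the confidence signal (low entropy means exit). The function names, the threshold value, and the use of plain NumPy arrays in place of real BERT layers are all illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def entropy(probs):
    """Shannon entropy of a probability vector (natural log)."""
    return -np.sum(probs * np.log(probs + 1e-12))

def early_exit_predict(layer_logits, threshold=0.2):
    """Hypothetical early-exit inference loop.

    layer_logits: one logit vector per transformer layer, produced by
    that layer's exit classifier (here just plain arrays).
    Returns (predicted_class, layer_at_which_we_exited).
    """
    for i, logits in enumerate(layer_logits):
        probs = softmax(logits)
        if entropy(probs) < threshold:  # confident enough: stop here
            return int(np.argmax(probs)), i + 1
    # No layer was confident: fall back to the final layer's output.
    return int(np.argmax(softmax(layer_logits[-1]))), len(layer_logits)
```

For example, a sample whose second-layer classifier is already near-certain would skip all remaining layers, which is where the reported inference-time savings come from; the entropy threshold trades speed against accuracy.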

Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, Jimmy Lin • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Captioning | MS COCO Karpathy (test) | CIDEr | 115.1 | 682
Natural Language Inference | SNLI (test) | Accuracy | -3.5 | 681
Language Modeling | PTB | Perplexity | 24.6 | 650
Natural Language Understanding | GLUE (dev) | SST-2 (Acc) | 93.4 | 504
Natural Language Understanding | GLUE | SST-2 | 91.5 | 452
Visual Question Answering | OK-VQA (test) | Accuracy | 23.4 | 296
Sentiment Analysis | IMDB (test) | Accuracy | -2.9 | 248
Visual Entailment | SNLI-VE (test) | Overall Accuracy | 78.8 | 197
Language Modeling | WikiText-103 | PPL | 10.1 | 146
Visual Question Answering | GQA (test) | Accuracy | 27.8 | 119

(Showing 10 of 32 rows.)
