
PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination

About

We develop a novel method, called PoWER-BERT, for improving the inference time of the popular BERT model while maintaining its accuracy. It works by: (a) exploiting redundancy among word-vectors (intermediate encoder outputs) and eliminating the redundant vectors; (b) determining which word-vectors to eliminate via a significance measure based on the self-attention mechanism; and (c) learning how many word-vectors to eliminate by augmenting the BERT model and the loss function. Experiments on the standard GLUE benchmark show that PoWER-BERT achieves up to a 4.5x reduction in inference time over BERT with <1% loss in accuracy. PoWER-BERT offers a significantly better trade-off between accuracy and inference time than prior methods, and attains up to a 6.8x reduction in inference time with <1% loss in accuracy when applied to ALBERT, a highly compressed version of BERT. The code for PoWER-BERT is publicly available at https://github.com/IBM/PoWER-BERT.
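As a rough illustration of step (b), a word-vector's significance can be measured by the total attention it receives, summed over heads and query positions, and the least significant vectors are then dropped. Below is a minimal NumPy sketch of that idea; the function names and toy shapes are our own for illustration, not taken from the paper's released code:

```python
import numpy as np

def significance_scores(attention):
    """Total attention each position receives, summed over heads and queries.

    attention: array of shape (num_heads, seq_len, seq_len), where
    attention[h, i, j] is the weight query i assigns to key j in head h.
    Returns an array of shape (seq_len,).
    """
    return attention.sum(axis=(0, 1))

def eliminate_word_vectors(hidden, attention, keep):
    """Keep the `keep` most significant word-vectors, preserving token order.

    hidden: (seq_len, hidden_dim) intermediate encoder outputs for one sequence.
    """
    scores = significance_scores(attention)
    keep_idx = np.sort(np.argsort(scores)[-keep:])  # top-k positions, in order
    return hidden[keep_idx]

# Toy example: 12 heads, 8 tokens, 16-dim hidden states.
rng = np.random.default_rng(0)
attn = rng.random((12, 8, 8))
attn /= attn.sum(axis=-1, keepdims=True)  # normalize rows, as softmax would
hidden = rng.standard_normal((8, 16))
pruned = eliminate_word_vectors(hidden, attn, keep=5)
print(pruned.shape)
```

In the full method this pruning is applied progressively across encoder layers, with the retention counts learned during training rather than fixed by hand as here.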

Saurabh Goyal, Anamitra R. Choudhury, Saurabh M. Raje, Venkatesan T. Chakaravarthy, Yogish Sabharwal, Ashish Verma • 2020

Related benchmarks

Task                           | Dataset                      | Metric          | Result | Rank
Natural Language Understanding | GLUE                         | SST-2           | 92.1   | 452
Natural Language Understanding | GLUE (test)                  | SST-2 Accuracy  | 92.2   | 416
Sentiment Analysis             | IMDB (test)                  | Accuracy        | 92.2   | 248
Question Answering             | SQuAD 2.0                    | F1              | 75.7   | 190
Text Classification            | SST-2                        | Accuracy        | 91.1   | 121
Text Classification            | IMDB                         | Accuracy        | 92.5   | 107
Text Classification            | 20News                       | Accuracy        | 87.4   | 101
Sentiment Analysis             | IMDB                         | Accuracy        | 70     | 57
Image Classification           | ImageNet ILSVRC2012 (val)    | Top-1 Accuracy  | 80.1   | 47
Sentiment Analysis             | Yelp                         | Accuracy        | 93.6   | 30

(Showing 10 of 19 rows.)
