PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination
About
We develop a novel method, called PoWER-BERT, for improving the inference time of the popular BERT model while maintaining its accuracy. It works by:

- exploiting redundancy pertaining to word-vectors (intermediate encoder outputs) and eliminating the redundant vectors;
- determining which word-vectors to eliminate by developing a strategy for measuring their significance, based on the self-attention mechanism;
- learning how many word-vectors to eliminate by augmenting the BERT model and the loss function.

Experiments on the standard GLUE benchmark show that PoWER-BERT achieves up to 4.5x reduction in inference time over BERT with <1% loss in accuracy, and offers a significantly better trade-off between accuracy and inference time than prior methods. When applied over ALBERT, a highly compressed version of BERT, our method attains up to 6.8x reduction in inference time with <1% loss in accuracy. The code for PoWER-BERT is publicly available at https://github.com/IBM/PoWER-BERT.
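To make the elimination step concrete, here is a minimal NumPy sketch of the significance scoring described above: a word-vector's significance at a layer is taken as the total self-attention it receives across heads and query positions, and only the top-scoring vectors are retained. The function names (`significance_scores`, `eliminate`) and the exact aggregation are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def significance_scores(attention):
    # attention: (num_heads, seq_len, seq_len) self-attention probabilities
    # for one encoder layer; score of position j is the total attention
    # paid to it over all heads and all query positions.
    return attention.sum(axis=(0, 1))  # shape: (seq_len,)

def eliminate(word_vectors, attention, keep):
    # Retain the `keep` most significant word-vectors, preserving
    # their original order in the sequence.
    scores = significance_scores(attention)
    top = np.sort(np.argsort(scores)[-keep:])
    return word_vectors[top]

# Toy example: 1 head, 4 positions, 8-dimensional word-vectors.
rng = np.random.default_rng(0)
att = rng.random((1, 4, 4))
att /= att.sum(axis=-1, keepdims=True)  # rows sum to 1, like softmax
wv = rng.random((4, 8))
pruned = eliminate(wv, att, keep=2)
print(pruned.shape)  # sequence shortened from 4 to 2 positions
```

In the actual scheme, the number of vectors kept at each layer is not a fixed hyperparameter but is learned jointly with the model via the augmented loss.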
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Understanding | GLUE | SST-2 | 92.1 | 452 |
| Natural Language Understanding | GLUE (test) | SST-2 Accuracy | 92.2 | 416 |
| Sentiment Analysis | IMDB (test) | Accuracy | 92.2 | 248 |
| Question Answering | SQuAD 2.0 | F1 | 75.7 | 190 |
| Text Classification | SST-2 | Accuracy | 91.1 | 121 |
| Text Classification | IMDB | Accuracy | 92.5 | 107 |
| Text Classification | 20News | Accuracy | 87.4 | 101 |
| Sentiment Analysis | IMDB | Accuracy | 70 | 57 |
| Image Classification | ImageNet ILSVRC2012 (val) | Top-1 Accuracy | 80.1 | 47 |
| Sentiment Analysis | Yelp | Accuracy | 93.6 | 30 |