
BinaryBERT: Pushing the Limit of BERT Quantization

About

The rapid development of large pre-trained language models has greatly increased the demand for model compression techniques, among which quantization is a popular solution. In this paper, we propose BinaryBERT, which pushes BERT quantization to the limit with weight binarization. We find that a binary BERT is harder to train directly than a ternary counterpart due to its complex and irregular loss landscape. Therefore, we propose ternary weight splitting, which initializes BinaryBERT by equivalently splitting from a half-sized ternary network. The binary model thus inherits the good performance of the ternary one, and can be further enhanced by fine-tuning the new architecture after splitting. Empirical results show that our BinaryBERT has only a slight performance drop compared with the full-precision model while being 24x smaller, achieving state-of-the-art compression results on the GLUE and SQuAD benchmarks.
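The core idea of ternary weight splitting is that a ternary weight tensor (entries in {-alpha, 0, +alpha}) can be decomposed exactly into the sum of two binary tensors, so the binary model starts from the ternary model's solution. A minimal NumPy sketch of one simplified variant of this idea (the function names, the TWN-style threshold ternarizer, and the particular split rule for zero entries are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def ternarize(w, delta_ratio=0.7):
    # TWN-style threshold ternarization: entries become {-alpha, 0, +alpha}.
    # delta_ratio is an illustrative hyperparameter, not from the paper.
    delta = delta_ratio * np.abs(w).mean()
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask, alpha

def split_ternary(t, alpha):
    # Split a ternary tensor t into two {-1, +1} binary tensors b1, b2
    # so that (alpha / 2) * (b1 + b2) reconstructs t exactly:
    #   nonzero entry: b1 = b2 = sign(t)  ->  (alpha/2) * 2*sign(t) = t
    #   zero entry:    b1 = +1, b2 = -1   ->  (alpha/2) * 0 = 0
    s = np.sign(t)
    b1 = np.where(s != 0, s, 1.0)
    b2 = np.where(s != 0, s, -1.0)
    return b1, b2

w = np.random.randn(4, 4)
t, alpha = ternarize(w)
b1, b2 = split_ternary(t, alpha)
# The split is lossless: the two binary halves sum back to the ternary weights.
assert np.allclose(0.5 * alpha * (b1 + b2), t)
```

Because the split is exact, the doubled-width binary network computes the same function as the half-sized ternary one at initialization; fine-tuning then adapts the new binary architecture.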

Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King • 2020

Related benchmarks

Task | Dataset | Result | Rank
Natural Language Understanding | GLUE (test) | - | 416
Summarization | Xsum | ROUGE-2: 17.05 | 108
Natural Language Understanding | GLUE (test dev) | MRPC Accuracy: 68.3 | 81
Summarization | CNN Daily Mail | ROUGE-1: 40.66 | 67
