
TernaryBERT: Distillation-aware Ultra-low Bit BERT

About

Transformer-based pre-training models like BERT have achieved remarkable performance in many natural language processing tasks. However, these models are both computation and memory expensive, hindering their deployment to resource-constrained devices. In this work, we propose TernaryBERT, which ternarizes the weights in a fine-tuned BERT model. Specifically, we use both approximation-based and loss-aware ternarization methods and empirically investigate the ternarization granularity of different parts of BERT. Moreover, to reduce the accuracy degradation caused by the lower capacity of low bits, we leverage the knowledge distillation technique in the training process. Experiments on the GLUE benchmark and SQuAD show that our proposed TernaryBERT outperforms the other BERT quantization methods, and even achieves performance comparable to the full-precision model while being 14.9x smaller.

Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, Qun Liu • 2020
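As a rough illustration of the two ingredients named in the abstract, the sketch below shows, in PyTorch, an approximation-based (TWN-style) ternarizer that maps weights to {-α, 0, +α}, and a simple distillation objective combining a soft-label loss on logits with an MSE on intermediate representations. This is not the authors' released code: the function names, the 0.7·mean(|w|) threshold heuristic, and the equal weighting of the loss terms are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ternarize(w: torch.Tensor) -> torch.Tensor:
    """Approximate w with alpha * t, where t has entries in {-1, 0, +1}.

    Sketch of approximation-based ternarization; the 0.7 threshold factor
    is a common TWN-style heuristic, not necessarily the paper's setting.
    """
    delta = 0.7 * w.abs().mean()                 # ternarization threshold
    mask = (w.abs() > delta).float()             # keep only large weights
    t = torch.sign(w) * mask                     # ternary codes {-1, 0, +1}
    # Scale alpha: mean magnitude of the surviving (non-zero) weights.
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)
    return alpha * t

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden, temperature=1.0):
    """Soft-label loss on logits plus layer-wise MSE on hidden states.

    Illustrative distillation objective; term weighting is an assumption.
    """
    soft_targets = torch.softmax(teacher_logits / temperature, dim=-1)
    log_probs = torch.log_softmax(student_logits / temperature, dim=-1)
    logit_loss = -(soft_targets * log_probs).sum(dim=-1).mean()
    hidden_loss = sum(F.mse_loss(s, t)
                      for s, t in zip(student_hidden, teacher_hidden))
    return logit_loss + hidden_loss
```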

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Natural Language Understanding | GLUE (dev) | SST-2 (Acc) | 93 | 504
Natural Language Understanding | GLUE (test) | - | - | 416
Question Answering | SQuAD v1.1 (dev) | F1 | 93.1 | 375
Question Answering | SQuAD v2.0 (dev) | F1 | 80.5 | 158
Summarization | Xsum | ROUGE-2 | 2.23 | 108
Summarization | CNN Daily Mail | ROUGE-1 | 10.95 | 67
Natural Language Inference | MNLI (dev) | Acc (m) | 86.9 | 44
