
Compression of Generative Pre-trained Language Models via Quantization

About

The growing size of generative pre-trained language models (PLMs) has greatly increased the demand for model compression. Despite various methods for compressing BERT and its variants, there have been few attempts to compress generative PLMs, and the underlying difficulty remains unclear. In this paper, we compress generative PLMs via quantization. We find that previous quantization methods fail on generative tasks due to homogeneous word embeddings caused by reduced capacity, and to the varied distribution of weights across modules. Correspondingly, we propose a token-level contrastive distillation to learn distinguishable word embeddings, and a module-wise dynamic scaling to make quantizers adaptive to different modules. Empirical results on various tasks show that our proposed method outperforms state-of-the-art compression methods on generative PLMs by a clear margin. With performance comparable to the full-precision models, we achieve 14.4x and 13.4x compression rates on GPT-2 and BART, respectively.
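To make the module-wise dynamic scaling idea concrete, below is a minimal NumPy sketch of a symmetric uniform weight quantizer whose clipping threshold is derived per module from that module's own weight statistics, so modules with different weight distributions get different quantization ranges. The factor `gamma` stands in for the learnable per-module scaling parameter described in the paper; its initialization and training are not shown here, and the function names are illustrative, not from the authors' code.

```python
import numpy as np

def quantize_module_weights(w, n_bits=2, gamma=1.0):
    """Symmetric uniform quantization with a module-wise dynamic clip.

    The clipping threshold alpha is computed from this module's own
    weights, scaled by a (learnable, here fixed) factor gamma, so the
    quantizer adapts to each module's weight distribution.
    """
    # Per-module dynamic clipping threshold.
    alpha = gamma * np.abs(w).max()
    # Number of positive integer levels for an n-bit signed grid.
    levels = 2 ** (n_bits - 1) - 1
    # Clip to [-alpha, alpha], round onto the integer grid, dequantize.
    w_clipped = np.clip(w, -alpha, alpha)
    w_int = np.round(w_clipped / alpha * levels)
    return w_int * alpha / levels

# Each module (e.g. attention vs. feed-forward weights) gets its own alpha.
w_attn = np.random.randn(16, 16)
w_ffn = 3.0 * np.random.randn(16, 16)  # a wider weight distribution
wq_attn = quantize_module_weights(w_attn, n_bits=2)
wq_ffn = quantize_module_weights(w_ffn, n_bits=2)
```

With `n_bits=2`, every weight collapses onto one of three values ({-alpha, 0, +alpha}), which is what makes a single shared clipping range fail when weight distributions vary across modules.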

Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong • 2022

Related benchmarks

Task | Dataset | Result | Rank
Language Modeling | WikiText-2 (test) | Perplexity 15.3 | 1541
Language Modeling | PTB | Perplexity 11.2 | 650
Language Modeling | WikiText-103 (test) | Perplexity 14.58 | 524
Language Modeling | PTB (test) | Perplexity 12.22 | 471
Natural Language Understanding | GLUE (test) | SST-2 Accuracy 93.57 | 416
Summarization | XSum (test) | ROUGE-2 17.78 | 231
Language Modeling | Penn Treebank (PTB) (test) | Perplexity 14.9 | 120
Summarization | XSum | ROUGE-2 17.78 | 108
Next Utterance Prediction | PERSONA-CHAT (val) | Accuracy 76.57 | 13
Arithmetic Reasoning | GSM8K | Accuracy 25.47 | 10

(10 of 14 rows shown)
