Loss-aware Weight Quantization of Deep Networks

About

The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and to m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate as (or even more accurate than) the full-precision network.

Lu Hou, James T. Kwok • 2018
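
Below is a minimal, hedged sketch of the weight format the abstract describes: a ternary weight matrix with a separate scaling parameter for the positive and for the negative weights. The threshold rule (0.7 x mean |w|) and all names here are illustrative assumptions; this is not the paper's loss-aware optimization procedure, which chooses the quantized values with respect to the training loss rather than by rounding the weights alone.

# Illustrative sketch only (assumed names and threshold rule), not the
# authors' loss-aware algorithm: ternarize a weight tensor to
# {-alpha_neg, 0, +alpha_pos} with independent positive/negative scales.
import numpy as np

def ternarize(w: np.ndarray, delta_factor: float = 0.7):
    """Map full-precision weights to three values: -alpha_neg, 0, +alpha_pos."""
    delta = delta_factor * np.mean(np.abs(w))   # assumed threshold rule
    pos = w > delta
    neg = w < -delta

    # One scale per sign: mean magnitude of the surviving entries of that sign.
    alpha_pos = w[pos].mean() if pos.any() else 0.0
    alpha_neg = -w[neg].mean() if neg.any() else 0.0

    w_q = np.zeros_like(w)
    w_q[pos] = alpha_pos
    w_q[neg] = -alpha_neg
    return w_q, alpha_pos, alpha_neg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.05, size=(256, 256)).astype(np.float32)
    w_q, a_pos, a_neg = ternarize(w)
    # Only three distinct values remain, so each entry needs 2 bits of storage
    # plus two floating-point scales per weight matrix.
    print(np.unique(w_q), a_pos, a_neg)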

Related benchmarks

Task                       Dataset                     Metric      Result  Rank
Language Modeling          WikiText-2 (test)           PPL         16.91   1949
Language Modeling          WikiText-103 (test)         Perplexity  15.88   579
Summarization              XSum (test)                 ROUGE-2     16.74   246
Language Modeling          Penn Treebank (PTB) (test)  Perplexity  15.87   120
Next Utterance Prediction  PERSONA-CHAT (val)          Accuracy    76.02   13
