Language model compression with weighted low-rank factorization

About

Factorizing a large matrix into smaller matrices is a popular strategy for model compression. Singular value decomposition (SVD) plays a vital role in this strategy, approximating a learned matrix with fewer parameters. However, SVD minimizes the squared error of reconstructing the original matrix without gauging the importance of its parameters, potentially giving a larger reconstruction error to the parameters that affect task accuracy more. In other words, the optimization objective of SVD is not aligned with the trained model's task accuracy. We analyze this previously unexplored problem, make observations, and address it by introducing Fisher information to weigh the importance of parameters for the model's predictions. This idea leads to our method: Fisher-Weighted SVD (FWSVD). Although the factorized matrices from our approach do not yield smaller reconstruction errors, we find that the resulting task accuracy is much closer to the original model's performance. Our analysis with transformer-based language models shows that the weighted SVD largely alleviates the mismatched optimization objectives and maintains model performance at higher compression rates. Our method can directly compress a task-specific model while achieving better performance than other compact-model strategies that require expensive pre-training. Moreover, when compressing an already compact model, our method can further reduce parameters by 9% to 30% with an insignificant impact on task accuracy.
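The core idea can be illustrated with a minimal NumPy sketch. The function below is an assumption-laden illustration, not the paper's implementation: it takes a weight matrix and a vector of per-row importance scores (in the paper these come from Fisher information estimated via squared gradients on task data; here they are just given), folds the scores into the matrix as row weights, runs a standard SVD on the weighted matrix, and then undoes the weighting so the factors approximate the original matrix while prioritizing important rows.

```python
import numpy as np

def fisher_weighted_svd(W, fisher_row, rank):
    """Rank-k factorization of W whose error is weighted by per-row importance.

    W          : (m, n) weight matrix to compress
    fisher_row : (m,) positive per-row importance scores; in FWSVD these are
                 Fisher information estimates summed over each row (assumed
                 precomputed here for illustration)
    rank       : target rank k
    Returns A (m, k), B (k, n) with W approximately equal to A @ B.
    """
    d = np.sqrt(fisher_row)                        # row weights
    # SVD of the row-weighted matrix: important rows get larger weights,
    # so the truncated SVD reconstructs them more faithfully.
    U, S, Vt = np.linalg.svd(d[:, None] * W, full_matrices=False)
    A = (U[:, :rank] * S[:rank]) / d[:, None]      # undo the row weighting
    B = Vt[:rank]
    return A, B

# Usage: at full rank the factorization is exact; lower ranks trade
# reconstruction error for fewer parameters.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))
fisher = rng.uniform(0.1, 2.0, size=8)
A, B = fisher_weighted_svd(W, fisher, rank=6)
assert np.allclose(A @ B, W)
```

Reducing the full weighted objective to a row-wise weighting is what makes it solvable with one standard SVD call; the compressed layer then stores `A` and `B` (mk + kn parameters) in place of `W` (mn parameters).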

Yen-Chang Hsu, Ting Hua, Sungen Chang, Qian Lou, Yilin Shen, Hongxia Jin • 2022

Related benchmarks

Task | Dataset | Result | Rank
Language Modeling | WikiText2 | Perplexity 2.00e+4 | 1875
Language Modeling | WikiText-2 (test) | PPL 15.98 | 1541
Language Modeling | C4 | Perplexity 2.00e+3 | 1182
Language Modeling | PTB | Perplexity 1.00e+4 | 650
Language Modeling | C4 (test) | Perplexity 1.51e+3 | 268
Zero-shot Reasoning | ARC-e, Winogrande, HellaSwag, PIQA | -- | 36
Commonsense Reasoning | 7 reasoning datasets (OpenbookQA, ARC-e, WinoGrande, HellaSwag, ARC-c, PIQA, MathQA) (test) | Overall Average Accuracy 32 | 28
Zero-shot Reasoning | Evaluation Suite Zero-shot (OpenbookQA, ARC-e, ARC-c, WinoGrande, HellaSwag, PIQA, MathQA) | Average Accuracy 2 | 24
Commonsense and Mathematical Reasoning | Reasoning and Math Suite (OpenBookQA, ARC-e, WinoGrande, HellaSwag, ARC-c, PIQA, MathQA, GSM8K) | OpenBookQA Acc 17 | 21
Language Modeling | WikiText-2, PTB, C4 | WikiText-2 Perplexity 8.06e+3 | 19

Showing 10 of 12 rows
