
Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers

About

With the increasing complexity of generative AI models, post-training quantization (PTQ) has emerged as a promising solution for deploying hyper-scale models on edge devices such as mobile phones and TVs. Existing PTQ schemes, however, consume considerable time and resources, which can become a bottleneck in real-world settings that require frequent model updates and multiple rounds of hyperparameter tuning. Learning-free PTQ schemes have been proposed as a cost-effective alternative, but their performance is limited because they cannot account for the inter-layer dependency within the attention module, a defining feature of Transformers. In this paper, we thus propose a novel PTQ algorithm that balances accuracy and efficiency. The key idea of the proposed algorithm, called aespa, is to perform quantization layer-wise for efficiency while targeting attention-wise reconstruction to consider the cross-layer dependency. Through extensive experiments on various language models and a complexity analysis, we demonstrate that aespa is both accurate and efficient in quantizing Transformer models.
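To make the distinction concrete, the toy sketch below contrasts a purely layer-wise reconstruction error (each projection evaluated in isolation, as in learning-free PTQ) with an attention-wise reconstruction error (all quantized projections evaluated jointly through the attention output), which is the objective aespa targets. This is a minimal illustration with assumed round-to-nearest uniform quantization and a single attention head, not the authors' implementation.

```python
import numpy as np

def quantize(w, bits=4):
    # Uniform symmetric round-to-nearest quantization (illustrative only).
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def attention(x, wq, wk, wv):
    # Single-head self-attention without the output projection.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    probs = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return probs @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))                      # calibration activations
wq, wk, wv = (rng.normal(size=(16, 16)) for _ in range(3))

# Layer-wise error: each projection's output error measured in isolation.
layer_err = sum(np.linalg.norm(x @ w - x @ quantize(w)) for w in (wq, wk, wv))

# Attention-wise error: quantized projections evaluated jointly through the
# attention output, capturing the cross-layer dependency inside the module.
attn_err = np.linalg.norm(
    attention(x, wq, wk, wv)
    - attention(x, quantize(wq), quantize(wk), quantize(wv)))
```

aespa quantizes one layer at a time (keeping the cost of learning-free schemes) but scores candidate quantized weights against the attention-wise objective rather than the isolated layer-wise one.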

Junhan Kim, Chungman Lee, Eulrang Cho, Kyungphil Park, Ho-young Kim, Joonyoung Kim, Yongkweon Jeon • 2024

Related benchmarks

Task               | Dataset                                                  | Result                    | Rank
Language Modeling  | WikiText2                                                | Perplexity 4.254          | 2839
Language Modeling  | WikiText-2 (test)                                        | PPL 4.254                 | 1949
Language Modeling  | C4                                                       | Perplexity 6.256          | 1422
Language Modeling  | PTB                                                      | Perplexity 8.283          | 1034
Language Modeling  | PTB (test)                                               | Perplexity 8.283          | 526
Language Modeling  | C4 (test)                                                | Perplexity 6.256          | 342
Question Answering | Evaluation Suite (ARC, HellaSwag, MMLU) Zero-shot (test) | ARC-C 50.34               | 67
Quantization       | OPT                                                      | Processing Time (s) 74.4  | 46
Quantization       | LLAMA                                                    | Processing Time (hr) 6.84 | 30
Quantization       | OPT v1 (train)                                           | Processing Time (min) 1.24| 23

(Showing 10 of 19 rows)

Other info

Code
