
AffineQuant: Affine Transformation Quantization for Large Language Models

About

The significant resource requirements associated with Large Language Models (LLMs) have generated considerable interest in techniques for compressing and accelerating neural networks. Among these techniques, Post-Training Quantization (PTQ) has attracted particular attention due to its noteworthy compression efficiency and cost-effectiveness at training time. Existing PTQ methods for LLMs limit the optimization scope to scaling transformations between pre- and post-quantization weights. In this paper, we advocate direct optimization using equivalent affine transformations in PTQ (AffineQuant). This approach extends the optimization scope and thus significantly reduces quantization errors. Additionally, by employing the corresponding inverse matrix, we can ensure equivalence between the pre- and post-quantization outputs of PTQ, thereby maintaining its efficiency and generalization capabilities. To ensure the invertibility of the transformation during optimization, we further introduce a gradual mask optimization method. This method initially focuses on optimizing the diagonal elements and gradually extends to the other elements. Such an approach aligns with the Levy-Desplanques theorem, theoretically ensuring invertibility of the transformation. As a result, significant performance improvements are evident across different LLMs on diverse datasets. To illustrate, we attain a C4 perplexity of 15.76 (2.26 lower than the 18.02 of OmniQuant) on the LLaMA2-7B model under W4A4 quantization without overhead. On zero-shot tasks, AffineQuant achieves an average accuracy of 58.61 (1.98 higher than the 56.63 of OmniQuant) when using 4/4-bit quantization for LLaMA-30B, setting a new state-of-the-art benchmark for PTQ in LLMs.
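The core identity behind the equivalence can be illustrated with a small NumPy sketch. This is a minimal, hypothetical example (the matrix sizes, the symmetric fake-quantizer, and the random affine matrix `A` are assumptions, not the paper's implementation): multiplying the weight by an invertible matrix `A` and the activations by `A⁻¹` leaves the full-precision output unchanged, but changes which matrix gets quantized, giving the optimizer a larger search space than scaling (diagonal `A`) alone.

```python
import numpy as np

def fake_quant(x, bits=4):
    # Illustrative symmetric uniform fake-quantization (round-to-nearest).
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.round(x / scale).clip(-qmax, qmax) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # hypothetical weight matrix
X = rng.normal(size=(4, 8))   # hypothetical activations

# An invertible affine matrix A; A = I is plain quantization, and a
# diagonal A recovers prior scaling-transform PTQ methods.
A = np.eye(8) + 0.01 * rng.normal(size=(8, 8))

# Equivalence: (X A^{-1}) (A W) == X W exactly in full precision.
out_fp = X @ W
out_eq = (X @ np.linalg.inv(A)) @ (A @ W)
assert np.allclose(out_fp, out_eq)

# Quantizing A W instead of W: the error now depends on A, so A can be
# optimized to minimize it while the equivalence above keeps outputs aligned.
out_q = (X @ np.linalg.inv(A)) @ fake_quant(A @ W)
```

In an actual PTQ pipeline, `A⁻¹` would be fused into the preceding layer (e.g., a normalization or projection) so the transformation adds no inference overhead, consistent with the "without overhead" claim above.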
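The invertibility guarantee can also be sketched. The Levy-Desplanques theorem states that a strictly diagonally dominant matrix is nonsingular. The band-shaped mask schedule below is a hypothetical stand-in for the paper's gradual mask (the exact schedule and step size are assumptions): starting from the identity and applying small masked updates that widen outward from the diagonal keeps the matrix strictly diagonally dominant, hence invertible, throughout optimization.

```python
import numpy as np

def strictly_diag_dominant(A):
    # Levy-Desplanques: |a_ii| > sum_{j != i} |a_ij| for every row
    # implies A is nonsingular (invertible).
    d = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - d
    return bool(np.all(d > off))

def band_mask(n, width):
    # Hypothetical schedule: only elements within `width` of the
    # diagonal are trainable; width grows over the optimization.
    i, j = np.indices((n, n))
    return (np.abs(i - j) <= width).astype(float)

n = 8
rng = np.random.default_rng(1)
A = np.eye(n)
for width in range(n):
    # Stand-in for a gradient step: a small update restricted to the
    # currently unmasked band around the diagonal.
    A = A + 0.01 * rng.normal(size=(n, n)) * band_mask(n, width)

# Small, masked updates never break strict diagonal dominance here,
# so the inverse used for the equivalent transformation always exists.
assert strictly_diag_dominant(A)
A_inv = np.linalg.inv(A)
```

The point of the sketch is the invariant, not the schedule: as long as every update leaves the matrix strictly diagonally dominant, the inverse required for the output-preserving transformation is guaranteed to exist.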

Yuexiao Ma, Huixia Li, Xiawu Zheng, Feng Ling, Xuefeng Xiao, Rui Wang, Shilei Wen, Fei Chao, Rongrong Ji• 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText2 | Perplexity | 7.35 | 1875
Language Modeling | WikiText-2 (test) | PPL | 4.269 | 1541
Language Modeling | C4 | Perplexity | 6.803 | 1182
Language Modeling | WikiText-2 | Perplexity (PPL) | 11.45 | 841
Language Modeling | PTB | Perplexity | 8.355 | 650
Language Modeling | PTB (test) | Perplexity | 8.355 | 471
Language Modeling | WikiText2 v1 (test) | Perplexity | 5.77 | 341
Language Modeling | C4 (test) | Perplexity | 6.27 | 268
Question Answering | Evaluation Suite (ARC, HellaSwag, MMLU) Zero-shot (test) | ARC-C | 49.66 | 67
Question Answering | QA Suite Zero-shot (PIQA, ARC-E, ARC-C, BoolQ, HellaSwag, WinoGrande) | PIQA Accuracy | 70.84 | 47

Showing 10 of 23 rows
