
GPTAQ: Efficient Finetuning-Free Quantization for Asymmetric Calibration

About

We introduce GPTAQ, a novel finetuning-free quantization method for compressing large-scale transformer architectures. Unlike the previous GPTQ method, which calibrates each layer independently, we always match the quantized layer's output to the exact output of the full-precision model, a scheme we call asymmetric calibration. This scheme effectively reduces the quantization error accumulated in earlier layers. We analyze the problem through optimal brain compression and derive a closed-form solution that explicitly minimizes both the quantization error and the accumulated asymmetry error. Furthermore, we apply several techniques to parallelize the computation of the solution, including channel parallelization, neuron decomposition, and a Cholesky reformulation for matrix fusion. As a result, GPTAQ is easy to implement, requiring only about 20 more lines of code than GPTQ while improving its performance under low-bit quantization. Remarkably, on a single GPU, we quantize a 405B language transformer as well as EVA-02, the top-ranked vision transformer, which achieves 90% ImageNet accuracy in pretraining evaluation. Code is available on GitHub.

Yuhang Li, Ruokai Yin, Donghyun Lee, Shiting Xiao, Priyadarshini Panda • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText2 | Perplexity | 3.47 | 2839
Language Modeling | WikiText-2 (test) | PPL | 5.01 | 1949
Language Modeling | C4 | Perplexity | 5.62 | 1422
Language Modeling | C4 | Perplexity | 7.34 | 1071
Science Question Answering | ScienceQA | Accuracy | 88.26 | 502
Visual Question Answering | ChartQA | Accuracy | 79.88 | 371
Chart Question Answering | ChartQA | Accuracy | 78.92 | 356
Visual Question Answering | TextVQA (val) | VQA Score | 81.68 | 343
Language Modeling | C4 (test) | Perplexity | 10.97 | 342
OCR Evaluation | OCRBench | Score | 79.5 | 329

(Showing 10 of 34 rows)
