
The Geometry of LLM Quantization: GPTQ as Babai's Nearest Plane Algorithm

About

Quantizing the weights of large language models (LLMs) from 16-bit down to lower bit-widths is the de facto approach for deploying massive transformers onto more affordable accelerators. While GPTQ emerged as one of the standard methods for one-shot post-training quantization at LLM scale, its inner workings are described as a sequence of algebraic updates that obscure its geometric meaning and any worst-case guarantees. In this work, we show that, when executed back-to-front (from the last dimension to the first) for a linear layer, GPTQ is mathematically identical to Babai's nearest plane algorithm for the classical closest vector problem (CVP) on a lattice defined by the Hessian matrix of the layer's inputs. This equivalence rests on a sophisticated mathematical argument and has two analytical consequences: first, the GPTQ error propagation step gains an intuitive geometric interpretation; second, GPTQ inherits the error upper bound of Babai's algorithm under the assumption that no weights are clipped. Leveraging this bound, we design post-training quantization methods that avoid clipping and outperform the original GPTQ. In addition, we provide efficient GPU inference kernels for the resulting representation. Taken together, these results place GPTQ on a firm theoretical footing and open the door to importing decades of progress in lattice algorithms towards the design of future quantization algorithms for billion-parameter models. Source code is available at https://github.com/IST-DASLab/GPTQ-Babai.
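To make the connection concrete, here is a minimal, illustrative sketch of Babai's nearest plane algorithm for CVP in pure Python. This is not the paper's code: the basis, target, and function names are hypothetical, and the example runs on a small 2-D lattice rather than a Hessian-derived one. The key structural parallel to GPTQ is the back-to-front loop that rounds one coordinate at a time and subtracts the committed choice from the residual (the error-propagation step).

```python
# Hypothetical sketch (not from the paper) of Babai's nearest plane
# algorithm: greedily round the target onto nested lattice hyperplanes,
# iterating from the last basis vector back to the first.

def gram_schmidt(basis):
    """Return the (unnormalized) Gram-Schmidt vectors of the basis rows."""
    ortho = []
    for b in basis:
        v = list(b)
        for u in ortho:
            uu = sum(x * x for x in u)
            coeff = sum(x * y for x, y in zip(b, u)) / uu
            v = [x - coeff * y for x, y in zip(v, u)]
        ortho.append(v)
    return ortho

def babai_nearest_plane(basis, target):
    """Approximate the lattice point closest to `target`.

    Processes dimensions back-to-front: pick the integer multiple of
    basis[i] whose hyperplane is nearest to the residual, commit it,
    and propagate the remaining error to earlier dimensions.
    """
    ortho = gram_schmidt(basis)
    residual = list(target)
    coeffs = [0] * len(basis)
    for i in reversed(range(len(basis))):
        u = ortho[i]
        uu = sum(x * x for x in u)
        c = round(sum(x * y for x, y in zip(residual, u)) / uu)
        coeffs[i] = c
        # Error propagation: subtract the committed lattice component.
        residual = [x - c * y for x, y in zip(residual, basis[i])]
    lattice_point = [t - r for t, r in zip(target, residual)]
    return lattice_point, coeffs

# Toy 2-D lattice with basis rows [1, 0] and [0.3, 1].
point, coeffs = babai_nearest_plane([[1.0, 0.0], [0.3, 1.0]], [2.6, 1.9])
# point == [2.6, 2.0], coeffs == [2, 2]
```

In the paper's framing, the unquantized weights play the role of the target vector, the lattice is determined by the layer's input Hessian, and GPTQ's per-column rounding plus error feedback is exactly this loop.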

Jiale Chen, Yalda Shabanzadeh, Elvir Crnčević, Torsten Hoefler, Dan Alistarh • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy 60.98 | 1891 |
| Language Modeling | WikiText-2 | Perplexity (PPL) 2 | 1624 |
| Question Answering | ARC Challenge | -- | 906 |
| Commonsense Reasoning | PIQA | Accuracy 72.91 | 751 |
| Question Answering | ARC Easy | Accuracy 60.4 | 597 |
| Physical Commonsense Reasoning | PIQA | Accuracy 77.53 | 572 |
| Language Modeling | C4 (val) | PPL 14.64 | 514 |
| Multitask Language Understanding | MMLU | Accuracy 67.91 | 413 |
| Language Modeling | WikiText2 (val) | Perplexity (PPL) 11.27 | 387 |
| Commonsense Reasoning | WinoGrande | Accuracy 69.53 | 372 |

Showing 10 of 31 rows.
