
GPTVQ: The Blessing of Dimensionality for LLM Quantization

About

In this work we show that the size-versus-accuracy trade-off of neural network quantization can be significantly improved by increasing the quantization dimensionality. We propose GPTVQ, a new fast method for post-training vector quantization (VQ) that scales well to Large Language Models (LLMs). Our method interleaves quantization of one or more columns with updates to the remaining unquantized weights, using information from the Hessian of the per-layer output reconstruction MSE. Quantization codebooks are initialized using an efficient data-aware version of the EM algorithm. The codebooks are then updated, and further compressed using integer quantization and SVD-based compression. GPTVQ establishes a new state of the art in the size-versus-accuracy trade-off on a wide range of LLMs such as Llama-v2 and Mistral. Furthermore, our method is efficient: on a single H100 it takes between 3 and 11 hours to process a Llama-v2 70B model, depending on the quantization setting. Lastly, with on-device timings for VQ decompression on a mobile CPU, we show that VQ leads to improved latency compared to using a 4-bit integer format.
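To make the core idea concrete, the sketch below illustrates plain vector quantization of a weight matrix: weights are grouped into d-dimensional vectors and mapped to a learned k-entry codebook, so each group of d weights is stored as a single log2(k)-bit index. This is a minimal illustration using ordinary k-means; the actual GPTVQ method uses a data-aware EM initialization, Hessian-based updates to unquantized weights, and further codebook compression, none of which are shown here. All function and parameter names are illustrative, not from the paper's code.

```python
import numpy as np

def vq_quantize(weights, d=2, k=256, iters=10, seed=0):
    """Toy d-dimensional vector quantization of a weight matrix.

    Returns a (k, d) codebook and an index matrix; reconstruction is
    a simple codebook lookup (the decompression step timed on-device).
    """
    rng = np.random.default_rng(seed)
    flat = weights.reshape(-1, d)  # group weights into d-dim vectors
    # Initialize the codebook with k distinct vectors from the data
    # (GPTVQ instead uses a Hessian-weighted EM initialization).
    codebook = flat[rng.choice(len(flat), k, replace=False)].copy()
    for _ in range(iters):
        # Assignment step: nearest codeword for each weight vector.
        dists = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        # Update step: each codeword becomes the mean of its cluster.
        for c in range(k):
            members = flat[assign == c]
            if len(members):
                codebook[c] = members.mean(0)
    dists = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(1)
    return codebook, assign.reshape(weights.shape[0], -1)

# Effective rate: log2(k) index bits shared across d weights.
# E.g. d=2, k=256 gives 8/2 = 4 bits per weight, plus codebook overhead.
```

Increasing d at fixed bits per weight is the "dimensionality" axis the paper exploits: higher-dimensional codewords capture correlations between neighboring weights that scalar (d=1) quantization cannot.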

Mart van Baalen, Andrey Kuzmin, Ivan Koryakovskiy, Markus Nagel, Peter Couperus, Cedric Bastoul, Eric Mahurin, Tijmen Blankevoort, Paul Whatmough• 2024

Related benchmarks

Task                      Dataset                                     Result           Rank
Language Modeling         WikiText2                                   Perplexity 4.27  1875
Language Modeling         WikiText-2 (test)                           PPL 3.64         1541
Commonsense Reasoning     HellaSwag                                   Accuracy 77      1460
Commonsense Reasoning     WinoGrande                                  Accuracy 71.7    776
Commonsense Reasoning     PIQA                                        Accuracy 79.4    647
Language Modeling         WikiText2 v1 (test)                         Perplexity 4.39  341
Question Answering        ARC-E                                       --               242
Reading Comprehension     BoolQ                                       Accuracy 79      219
Question Answering        ARC-C                                       Accuracy 48.1    166
Zero-shot QA & Reasoning  Accuracy Tasks Zero-shot (AC, AE, WI, QA)   AC Score 54.9    52

(Showing 10 of 11 rows.)
