QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks

About

Post-training quantization (PTQ) reduces the memory footprint of LLMs by quantizing their weights to low precision. In this work, we introduce QuIP#, a weight-only PTQ method that achieves state-of-the-art results in extreme compression regimes ($\le$ 4 bits per weight) using three novel techniques. First, QuIP# improves QuIP's (Chee et al., 2023) incoherence processing by using the randomized Hadamard transform, which is faster and has better theoretical properties. Second, QuIP# uses vector quantization to take advantage of the ball-shaped sub-Gaussian distribution that incoherent weights possess: specifically, we introduce a set of hardware-efficient codebooks based on the highly symmetric $E_8$ lattice, which achieves the optimal 8-dimensional unit-ball packing. Third, QuIP# uses fine-tuning to improve fidelity to the original model. Our experiments show that QuIP# outperforms existing PTQ methods, enables new behaviors in PTQ scaling, and supports fast inference. Our code can be found at https://github.com/Cornell-RelaxML/quip-sharp.
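The two key ingredients of the abstract can be sketched in plain NumPy. The following is a minimal illustration, not the paper's implementation: the function names (`fwht`, `incoherence_process`, `nearest_D8`, `nearest_E8`) are made up for this sketch, it assumes power-of-two matrix dimensions, and it omits the scaling and finite-codebook construction that actual QuIP# quantization requires. The first part shows randomized Hadamard incoherence processing (random sign flips followed by an orthogonal Hadamard mix on both sides); the second shows nearest-point rounding to the $E_8$ lattice via the classic decomposition $E_8 = D_8 \cup (D_8 + \tfrac{1}{2})$.

```python
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform along the last axis.

    The last-axis length must be a power of two. The normalized Hadamard
    matrix is symmetric and orthogonal, so applying fwht twice recovers
    the input.
    """
    x = np.asarray(x, dtype=float).copy()
    n = x.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(n)

def incoherence_process(W, seed=0):
    """Random signs + Hadamard mix on both sides of a weight matrix.

    Illustrative sketch of randomized-Hadamard incoherence processing:
    the transform is orthogonal (exactly invertible, Frobenius-norm
    preserving) and spreads out large entries, making the transformed
    weights look sub-Gaussian.
    """
    m, n = W.shape
    rng = np.random.default_rng(seed)
    s_row = rng.choice([-1.0, 1.0], size=m)
    s_col = rng.choice([-1.0, 1.0], size=n)
    W = W * s_row[:, None] * s_col[None, :]  # random sign flips
    W = fwht(W)        # Hadamard mix along rows (last axis)
    W = fwht(W.T).T    # Hadamard mix along columns
    return W, (s_row, s_col)

def nearest_D8(x):
    """Nearest point of the D8 lattice (integer vectors with even sum)."""
    f = np.round(x)
    if int(round(f.sum())) % 2 != 0:
        # Fix parity by re-rounding the coordinate with the largest error.
        i = int(np.argmax(np.abs(x - f)))
        f[i] += 1.0 if x[i] > f[i] else -1.0
    return f

def nearest_E8(x):
    """Nearest point of the E8 lattice, using E8 = D8 union (D8 + 1/2)."""
    c1 = nearest_D8(x)
    c2 = nearest_D8(x - 0.5) + 0.5
    return c1 if np.sum((x - c1) ** 2) <= np.sum((x - c2) ** 2) else c2
```

For example, `nearest_E8(np.full(8, 0.6))` returns the half-integer point `[0.5]*8`, which is closer than the all-ones integer point; this is exactly the kind of deep hole where the half-integer coset of $E_8$ pays off.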

Albert Tseng, Jerry Chee, Qingyao Sun, Volodymyr Kuleshov, Christopher De Sa • 2024

Related benchmarks

Task                  | Dataset            | Metric           | Result | Rank
Language Modeling     | WikiText2          | Perplexity       | 4.9    | 2839
Language Modeling     | WikiText-2 (test)  | PPL              | 2.99   | 1949
Commonsense Reasoning | HellaSwag          | Accuracy         | 63.51  | 1891
Language Modeling     | WikiText-2         | Perplexity (PPL) | 5.35   | 1624
Language Modeling     | C4                 | Perplexity       | 7.2    | 1422
Commonsense Reasoning | WinoGrande         | Accuracy         | 76.8   | 1085
Language Modeling     | C4                 | Perplexity       | 8.36   | 1071
Question Answering    | ARC Challenge      | Accuracy         | 39.5   | 906
Commonsense Reasoning | PIQA               | Accuracy         | 78.45  | 751
Language Modeling     | WikiText           | PPL              | 6.19   | 732

(Showing 10 of 61 rows)
