
QTIP: Quantization with Trellises and Incoherence Processing

About

Post-training quantization (PTQ) reduces the memory footprint of LLMs by quantizing weights to low-precision datatypes. Since LLM inference is usually memory-bound, PTQ methods can improve inference throughput. Recent state-of-the-art PTQ approaches use vector quantization (VQ) to quantize multiple weights at once, which improves information utilization through better shaping. However, VQ requires a codebook whose size is exponential in the dimension. This limits current VQ-based PTQ methods to low VQ dimensions ($\le 8$), which in turn limits quantization quality. Here, we introduce QTIP, which instead uses trellis coded quantization (TCQ) to achieve ultra-high-dimensional quantization. TCQ uses a stateful decoder that separates the codebook size from the bitrate and effective dimension. QTIP introduces a spectrum of lookup-only to computed lookup-free trellis codes designed for a hardware-efficient "bitshift" trellis structure; these codes achieve state-of-the-art results in both quantization quality and inference speed.
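To illustrate how a stateful decoder decouples codebook size from bitrate, here is a minimal sketch of decoding with a "bitshift" trellis. This is not QTIP's actual implementation: the function name, the random Gaussian codebook, and the parameter choices are hypothetical, chosen only to show the state-transition structure. The decoder state is the last L bits consumed; each step shifts in k fresh bits, so the rate is k bits per weight while the effective codebook has 2^L entries.

```python
import numpy as np

def bitshift_trellis_decode(bits, L=12, k=2, codebook=None, seed=0):
    """Decode a bitstream with a bitshift trellis.

    The decoder state is the last L bits seen. Each step shifts in k new
    bits and emits the codebook value indexed by the resulting state, so
    the bitrate (k bits/weight) is decoupled from the codebook size (2**L).
    """
    if codebook is None:
        # Hypothetical random Gaussian codebook, for illustration only.
        rng = np.random.default_rng(seed)
        codebook = rng.standard_normal(2 ** L)
    mask = (1 << L) - 1
    state = 0
    out = []
    for i in range(0, len(bits), k):
        chunk = int("".join(map(str, bits[i:i + k])), 2)
        state = ((state << k) | chunk) & mask  # bitshift state transition
        out.append(codebook[state])
    return np.array(out)
```

Note that successive outputs share L - k state bits, which is what gives the trellis its high effective dimension; QTIP's "computed" codes replace the explicit lookup table with a hash-like function of the state so that no large codebook needs to reside in cache.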

Albert Tseng, Qingyao Sun, David Hou, Christopher De Sa • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText2 | Perplexity | 3.16 | 2839 |
| Language Modeling | WikiText-2 (test) | PPL | 2.75 | 1949 |
| Commonsense Reasoning | HellaSwag | Accuracy | 60.8 | 1891 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 5.11 | 1624 |
| Language Modeling | C4 | Perplexity | 5 | 1422 |
| Language Modeling | C4 | Perplexity | 7.99 | 1071 |
| Language Modeling | WikiText | PPL | 5.86 | 732 |
| Reasoning | BBH | Accuracy | 36.27 | 672 |
| Instruction Following | IFEval | IFEval Accuracy | 25.74 | 625 |
| Language Modeling | C4 (val) | PPL | 5.83 | 514 |

Showing 10 of 55 rows

Other info

Code
