
FlatQuant: Flatness Matters for LLM Quantization

About

Recently, quantization has been widely used for the compression and acceleration of large language models (LLMs). Due to the outliers in LLMs, it is crucial to flatten weights and activations to minimize quantization error with equally spaced quantization points. Prior research explores various pre-quantization transformations to suppress outliers, such as per-channel scaling and the Hadamard transformation. However, we observe that these transformed weights and activations can still exhibit steep and dispersed distributions. In this paper, we propose FlatQuant (Fast and Learnable Affine Transformation), a new post-training quantization approach that enhances the flatness of weights and activations. Our approach identifies an optimal affine transformation for each linear layer, calibrated in hours via a lightweight objective. To reduce the runtime overhead of the affine transformations, we decompose each one into a Kronecker product of two lightweight matrices, and fuse all operations in FlatQuant into a single kernel. Extensive experiments demonstrate that FlatQuant sets a new state-of-the-art for quantization. For example, it achieves less than 1% accuracy drop for W4A4 quantization on the LLaMA-3-70B model, surpassing SpinQuant by 7.5%. Additionally, it provides up to 2.3x prefill speedup and 1.7x decoding speedup compared to the FP16 model. Code is available at: https://github.com/ruikangliu/FlatQuant.
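The Kronecker decomposition mentioned in the abstract is what keeps the learned affine transforms cheap at inference time: a transform over an n-dimensional activation never has to be materialized as an n x n matrix. Below is a minimal NumPy sketch of this identity; the sizes and random matrices are purely illustrative, not FlatQuant's actual learned transforms or kernel.

```python
import numpy as np

np.random.seed(0)
n1, n2 = 8, 16                 # illustrative factor sizes; n = n1 * n2 = 128
P1 = np.random.randn(n1, n1)   # two small factor matrices (hypothetical)
P2 = np.random.randn(n2, n2)
x = np.random.randn(n1 * n2)   # one activation vector of dimension n

# Naive: materialize the full n x n transform and apply it
# (O(n^2) work and O(n^2) memory per transform).
full = np.kron(P1, P2) @ x

# Kronecker trick: reshape the vector to (n1, n2) and apply the two
# small factors on either side (O(n * (n1 + n2)) work, no big matrix).
fast = (P1 @ x.reshape(n1, n2) @ P2.T).reshape(-1)

# Both paths compute the same transformed activation.
assert np.allclose(full, fast)
```

With row-major flattening, `kron(P1, P2) @ x` equals `P1 @ X @ P2.T` for `X = x.reshape(n1, n2)`, which is why two matrices of size roughly sqrt(n) suffice in place of one of size n.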

Yuxuan Sun, Ruikang Liu, Haoli Bai, Han Bao, Kang Zhao, Yuening Li, Jiaxin Hu, Xianzhi Yu, Lu Hou, Chun Yuan, Xin Jiang, Wulong Liu, Jun Yao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Language Modeling | WikiText2 | Perplexity | 7.81 | 1875 |
| Language Modeling | C4 | Perplexity | 7.25 | 1182 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 6.51 | 841 |
| Reasoning | 7-benchmark commonsense and reading-comprehension suite (ARC-Easy, ARC-Challenge, HellaSwag, WinoGrande, PIQA, BoolQ, and OpenBookQA), LM Evaluation Harness default (test) | Accuracy | 67.8 | 108 |
| Image Super-Resolution | DRealSR | MANIQA | 0.4728 | 78 |
| Code Reasoning | LiveCodeBench | Accuracy | 29.1 | 46 |
| Mathematical Reasoning | AIME-120 | Accuracy | 32.78 | 35 |
| Real-World Image Super-Resolution | RealLR200 | MUSIQ | 56.47 | 26 |
| Real-World Image Super-Resolution | RealLQ250 | MUSIQ | 0.5776 | 26 |
| Real-World Image Super-Resolution | RealSR | LPIPS | 0.6871 | 23 |

(Showing 10 of 16 rows.)
