FlatQuant: Flatness Matters for LLM Quantization

About

Recently, quantization has been widely used for the compression and acceleration of large language models (LLMs). Due to the outliers in LLMs, it is crucial to flatten weights and activations to minimize quantization error with equally spaced quantization points. Prior research explores various pre-quantization transformations to suppress outliers, such as per-channel scaling and Hadamard transformation. However, we observe that these transformed weights and activations can still exhibit steep and dispersed distributions. In this paper, we propose FlatQuant (Fast and Learnable Affine Transformation), a new post-training quantization approach that enhances the flatness of weights and activations. Our approach identifies an optimal affine transformation for each linear layer, calibrated in hours via a lightweight objective. To reduce the runtime overhead of these affine transformations, we apply a Kronecker product of two lightweight matrices and fuse all operations in FlatQuant into a single kernel. Extensive experiments demonstrate that FlatQuant sets a new state-of-the-art benchmark for quantization. For example, it achieves less than 1% accuracy drop for W4A4 quantization on the LLaMA-3-70B model, surpassing SpinQuant by 7.5%. Additionally, it provides up to 2.3x prefill speedup and 1.7x decoding speedup compared to the FP16 model. Code is available at: https://github.com/ruikangliu/FlatQuant.
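As a rough illustration of the idea in the abstract, the sketch below applies a Kronecker-factored invertible transform to outlier-heavy activations before 4-bit uniform quantization, folding the inverse into the weight so the layer output is preserved. It is a minimal NumPy example under assumed shapes; the function names (quantize_uniform, kron_transform) and the random orthogonal factors are illustrative choices, not FlatQuant's actual API — FlatQuant learns the per-layer factors on calibration data and fuses the transform into a single kernel.

```python
import numpy as np

def quantize_uniform(x, n_bits=4):
    """Symmetric per-tensor uniform quantization with equally spaced points."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale  # dequantized values, used to measure quantization error

def kron_transform(x, P1, P2):
    """Apply a Kronecker-factored transform to each row of x.

    x: (batch, d1 * d2), P1: (d1, d1), P2: (d2, d2).
    Each row is reshaped to (d1, d2) and mapped to P1 @ X @ P2, which costs
    O(d * (d1 + d2)) per row instead of O(d^2) for a full d x d matrix.
    """
    b, d = x.shape
    d1, d2 = P1.shape[0], P2.shape[0]
    X = x.reshape(b, d1, d2)
    return np.einsum("ij,bjk,kl->bil", P1, X, P2).reshape(b, d)

rng = np.random.default_rng(0)
d1, d2 = 8, 16                      # toy hidden size d = 128, factored as d1 * d2
d = d1 * d2
x = rng.normal(size=(32, d))
x[:, 3] *= 50.0                     # inject an outlier channel, as seen in LLM activations
W = rng.normal(size=(d, d)) * 0.02  # toy linear layer weight, y = x @ W

# Illustrative factors: random orthogonal matrices (assumption for this demo).
# FlatQuant instead *learns* the per-layer factors; orthogonal init is used
# here only to keep the transform well-conditioned and invertible.
Q1, _ = np.linalg.qr(rng.normal(size=(d1, d1)))
Q2, _ = np.linalg.qr(rng.normal(size=(d2, d2)))

# The same transform as an explicit d x d matrix (row-major reshape convention),
# used only offline here to fold the inverse into the weight.
T = np.kron(Q1, Q2.T)
x_t = kron_transform(x, Q1, Q2)
assert np.allclose(x_t, x @ T.T)       # factored form matches the full matrix

W_t = np.linalg.solve(T.T, W)          # weight absorbs the inverse transform
y_ref = x @ W
assert np.allclose(x_t @ W_t, y_ref)   # layer output unchanged in full precision

y_base = quantize_uniform(x) @ W       # 4-bit on raw, outlier-heavy activations
y_flat = quantize_uniform(x_t) @ W_t   # 4-bit on flattened activations

print("output MSE, raw INT4:        ", np.mean((y_ref - y_base) ** 2))
print("output MSE, transformed INT4:", np.mean((y_ref - y_flat) ** 2))
```

On this toy setup, the transformed path typically shows a much smaller output error: spreading the outlier channel across dimensions shrinks the dynamic range, so the equally spaced quantization grid can use a much finer step.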

Yuxuan Sun, Ruikang Liu, Haoli Bai, Han Bao, Kang Zhao, Yuening Li, Jiaxin Hu, Xianzhi Yu, Lu Hou, Chun Yuan, Xin Jiang, Wulong Liu, Jun Yao• 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText2 | Perplexity | 3.55 | 2839
Language Modeling | WikiText-2 | Perplexity (PPL) | 5.08 | 1624
Language Modeling | C4 | Perplexity | 5.91 | 1422
Code Generation | HumanEval | -- | -- | 1036
Language Modeling | WikiText | PPL | 5.94 | 732
Multimodal Understanding | MMBench | Accuracy | 81.61 | 637
Multimodal Understanding | MMMU | Accuracy | 48.89 | 437
Multi-task Language Understanding | MMLU | Accuracy | 81.52 | 321
Image Super-resolution | DRealSR | MANIQA | 0.4728 | 130
Massive Multitask Language Understanding | MMLU | Accuracy | 59.34 | 117

(Showing 10 of 32 rows.)
