
DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs

About

Quantization of large language models (LLMs) faces significant challenges, particularly due to the presence of outlier activations that impede efficient low-bit representation. Traditional approaches predominantly address Normal Outliers, activations across all tokens with relatively large magnitudes. However, these methods struggle to smooth Massive Outliers, which display significantly larger values and cause severe performance degradation in low-bit quantization. In this paper, we introduce DuQuant, a novel approach that uses rotation and permutation transformations to more effectively mitigate both massive and normal outliers. First, DuQuant constructs rotation matrices, using specific outlier dimensions as prior knowledge, to redistribute outliers to adjacent channels via block-wise rotation. Second, we employ a zigzag permutation to balance the distribution of outliers across blocks, thereby reducing block-wise variance. A subsequent rotation further smooths the activation landscape, enhancing model performance. DuQuant simplifies the quantization process and excels in managing outliers, outperforming state-of-the-art baselines across various sizes and types of LLMs on multiple tasks, even with 4-bit weight-activation quantization. Our code is available at https://github.com/Hsu1023/DuQuant.
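The two transformations are easiest to see in code. Below is a minimal NumPy sketch of the pipeline the abstract describes (block-wise rotation, then a zigzag permutation, then a second rotation). It is illustrative only: the function names, the block size of 4, and the use of random orthogonal matrices are assumptions made for this sketch, whereas DuQuant constructs its rotations from the known outlier dimensions; see the linked repository for the actual implementation.

```python
import numpy as np


def blockwise_rotation(X, block_size=4, seed=0):
    """Rotate each contiguous block of channels with an orthogonal matrix.

    NOTE: a random orthogonal matrix (via QR) stands in for DuQuant's
    rotation, which the paper builds from specific outlier dimensions.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    assert n % block_size == 0, "channel count must divide into blocks"
    out = np.empty_like(X)
    for start in range(0, n, block_size):
        Q, _ = np.linalg.qr(rng.standard_normal((block_size, block_size)))
        out[:, start:start + block_size] = X[:, start:start + block_size] @ Q
    return out


def zigzag_permutation(X, block_size=4):
    """Deal channels, sorted by peak magnitude, to blocks in zigzag order.

    The back-and-forth assignment (0, 1, ..., k-1, k-1, ..., 1, 0, ...)
    balances outlier mass across blocks, reducing block-wise variance.
    """
    n = X.shape[1]
    n_blocks = n // block_size
    order = np.argsort(-np.abs(X).max(axis=0))  # strongest channels first
    slots = [[] for _ in range(n_blocks)]
    b, step = 0, 1
    for ch in order:
        slots[b].append(ch)
        b += step
        if b in (-1, n_blocks):  # hit an edge: reverse direction
            step = -step
            b += step
    perm = np.concatenate([np.asarray(s, dtype=int) for s in slots])
    return X[:, perm], perm


# Toy activations with two massive-outlier channels.
X = np.random.default_rng(42).standard_normal((128, 16))
X[:, [3, 7]] *= 50.0
Y = blockwise_rotation(X)            # spread outliers within each block
Y, perm = zigzag_permutation(Y)      # balance outlier mass across blocks
Y = blockwise_rotation(Y, seed=1)    # second rotation smooths further
print(np.abs(X).max(axis=0).round(1))  # spiky per-channel ranges
print(np.abs(Y).max(axis=0).round(1))  # noticeably flatter ranges
```

On the toy input, the two injected outlier channels initially dominate their blocks; after the rotation-permutation-rotation sequence, the per-channel dynamic ranges are far more uniform, which is what makes low-bit activation quantization tractable.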

Haokun Lin, Haobo Xu, Yichen Wu, Jingzhi Cui, Yingtao Zhang, Linzhan Mou, Linqi Song, Zhenan Sun, Ying Wei · 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText2 | Perplexity | 3.76 | 2839
Language Modeling | WikiText-2 | Perplexity (PPL) | 3.79 | 1624
Language Modeling | C4 | Perplexity | 5.85 | 1422
Language Modeling | C4 | Perplexity | 11.94 | 1071
Language Modeling | PTB | Perplexity | 10.77 | 1034
Multi-task Language Understanding | MMLU | -- | -- | 876
Robot Manipulation | LIBERO | Goal Achievement | 71.5 | 700
Multimodal Understanding | SEED-Bench | Accuracy | 66.15 | 343
Language Modeling | C4 (test) | Perplexity | 5.49 | 342
Science Question Answering | ScienceQA IMG | Accuracy | 70.2 | 294

(Showing 10 of 29 rows.)

Other info

Code: https://github.com/Hsu1023/DuQuant
