
SliderQuant: Accurate Post-Training Quantization for LLMs

About

In this paper, we address post-training quantization (PTQ) for large language models (LLMs) from an overlooked perspective: given a pre-trained high-precision LLM, the predominant sequential quantization framework treats different layers equally, but this may not be optimal in challenging bit-width settings. We empirically study the quantization impact of different layers on model accuracy, and observe that: (1) shallow/deep layers are usually more sensitive to quantization than intermediate layers; (2) among shallow/deep layers, the most sensitive is the first/last layer, which exhibits significantly larger quantization error than the others. These empirical observations imply that quantization for LLMs should be designed at multiple levels across layers rather than at a single level shared by all layers. Motivated by this, we propose a new PTQ framework termed Sliding-layer Quantization (SliderQuant) that relies on a simple adaptive sliding quantization concept facilitated by a few learnable parameters. The base component of SliderQuant is called inter-layer sliding quantization, which incorporates three types of novel sliding window designs tailored to the varying quantization sensitivity of shallow, intermediate and deep layers. The other component, called intra-layer sliding quantization, leverages an incremental strategy to quantize the layers within each window. As a result, SliderQuant has a strong ability to reduce quantization errors across layers. Extensive experiments on basic language generation, zero-shot commonsense reasoning, and challenging math and code tasks with various LLMs, including the Llama/Llama2/Llama3/Qwen2.5 model families, DeepSeek-R1 distilled models and large MoE models, show that our method outperforms existing PTQ methods (including the latest PTQ methods using rotation transformations) for both weight-only quantization and weight-activation quantization.
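To make the sliding-window idea concrete, here is a minimal, hypothetical sketch of quantizing a stack of layer weights in overlapping windows, using plain round-to-nearest (RTN) quantization as the per-layer primitive. This is not the authors' algorithm (SliderQuant's window designs and learnable parameters are not reproduced here); the function names, the window/stride choices, and the use of RTN are all illustrative assumptions.

```python
import numpy as np

def quantize_rtn(w, bits=4):
    """Symmetric round-to-nearest weight quantization (a standard PTQ baseline)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    if scale == 0:
        return w.copy()
    return np.round(w / scale).clip(-qmax, qmax) * scale

def sliding_window_quantize(layers, window=2, bits=4):
    """Quantize layer weights in overlapping sliding windows, front to back.

    Hypothetical illustration of the inter-layer sliding idea: adjacent layers
    are grouped into windows and processed window by window, with an overlap of
    one layer between consecutive windows, loosely mimicking the incremental
    intra-window pass described in the abstract.
    """
    quantized = [w.copy() for w in layers]
    stride = max(1, window - 1)  # stride < window -> consecutive windows overlap
    for start in range(0, len(layers), stride):
        for i in range(start, min(start + window, len(layers))):
            quantized[i] = quantize_rtn(quantized[i], bits=bits)
    return quantized
```

With symmetric RTN, each quantized weight deviates from the original by at most half a quantization step (`|w|.max() / (2**(bits-1) - 1) / 2`), so the sketch can be sanity-checked by bounding the per-layer error.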

Shigeng Wang, Chao Li, Yangyuxuan Kang, Jiawei Fan, Zhonghong Ou, Anbang Yao• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Language Modeling | WikiText2 | Perplexity | 3.5 | 2839 |
| Language Modeling | C4 | Perplexity | 5.87 | 1422 |
| Language Modeling | C4 | Perplexity | 5.6 | 1071 |
| Code Generation | HumanEval+ | Pass@1 | 80.49 | 383 |
| Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score | 67.51 | 316 |
| Code Generation | MBPP+ | Pass@1 | 69.05 | 216 |
| Mathematical Reasoning | AIME 2024 | Pass@1 Accuracy | 76.67 | 165 |
| Language Generation | WikiText2 | Perplexity | 3.41 | 151 |
| Language Modeling | C4 | C4 Loss | 6.78 | 121 |
| Common Sense Reasoning | Common Sense Reasoning Tasks (ARC-C, ARC-E, BoolQ, HellaSwag, PIQA, WinoGrande), zero-shot | Average Accuracy (Zero-Shot) | 65.24 | 92 |

Showing 10 of 18 rows.
