
SqueezeLLM: Dense-and-Sparse Quantization

About

Generative Large Language Models (LLMs) have demonstrated remarkable results for a wide range of tasks. However, deploying these models for inference has been a significant challenge due to their unprecedented resource requirements. This has forced existing deployment frameworks to use multi-GPU inference pipelines, which are often complex and costly, or to use smaller and less performant models. In this work, we demonstrate that the main bottleneck for generative inference with LLMs is memory bandwidth, rather than compute, specifically for single batch inference. While quantization has emerged as a promising solution by representing weights with reduced precision, previous efforts have often resulted in notable performance degradation. To address this, we introduce SqueezeLLM, a post-training quantization framework that not only enables lossless compression to ultra-low precisions of up to 3-bit, but also achieves higher quantization performance under the same memory constraint. Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition that stores outliers and sensitive weight values in an efficient sparse format. When applied to the LLaMA models, our 3-bit quantization significantly reduces the perplexity gap from the FP16 baseline by up to 2.1x as compared to the state-of-the-art methods with the same memory requirement. Furthermore, when deployed on an A6000 GPU, our quantized models achieve up to 2.3x speedup compared to the baseline. Our code is available at https://github.com/SqueezeAILab/SqueezeLLM.
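The two ideas in the abstract can be sketched together: split the weight matrix into a small sparse component holding the outlier/sensitive values (kept in full precision, stored in a sparse format such as CSR in practice) and a dense remainder quantized with a non-uniform codebook. The sketch below is an illustrative simplification, not the SqueezeLLM implementation: it uses plain 1-D k-means (Lloyd's algorithm) for the codebook, whereas SqueezeLLM weights each value by a second-order (Fisher-based) sensitivity score; the function name and parameters are hypothetical.

```python
import numpy as np

def dense_and_sparse_quantize(W, outlier_pct=0.5, n_levels=8, n_iters=20, seed=0):
    """Illustrative sketch of Dense-and-Sparse quantization (not the
    official SqueezeLLM code): extract large-magnitude outliers into a
    sparse full-precision component, then quantize the dense remainder
    with a non-uniform k-means codebook (3-bit => 8 levels)."""
    rng = np.random.default_rng(seed)

    # 1) Sparse component: the top `outlier_pct` percent of values by
    #    magnitude are kept exact (stored as CSR in a real kernel).
    thresh = np.percentile(np.abs(W), 100 - outlier_pct)
    sparse_mask = np.abs(W) >= thresh
    sparse = np.where(sparse_mask, W, 0.0)
    dense = np.where(sparse_mask, 0.0, W)

    # 2) Non-uniform codebook: 1-D k-means over the remaining dense
    #    values. SqueezeLLM additionally weights values by sensitivity;
    #    here every value is weighted equally for brevity.
    vals = dense[~sparse_mask]
    centroids = rng.choice(vals, size=n_levels, replace=False)
    for _ in range(n_iters):
        idx = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(idx == k):
                centroids[k] = vals[idx == k].mean()

    # 3) Dense part becomes integer codes plus a tiny lookup table;
    #    dequantization is codebook lookup + sparse add-back.
    codes = np.argmin(np.abs(dense[..., None] - centroids), axis=-1)
    W_hat = centroids[codes]
    W_hat[sparse_mask] = W[sparse_mask]  # outliers restored exactly
    return W_hat, codes, centroids, sparse

W = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
W_hat, codes, centroids, sparse = dense_and_sparse_quantize(W)
```

With 8 centroids the dense codes fit in 3 bits each, so memory cost is roughly 3 bits/weight plus the small sparse outlier store, which matches why the decomposition helps at ultra-low precision: the few extreme values that would otherwise stretch the quantization range no longer constrain the codebook.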

Sehoon Kim, Coleman Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W. Mahoney, Kurt Keutzer • 2023

Related benchmarks

Task                               Dataset             Metric      Result  Rank
Language Modeling                  WikiText-2 (test)   PPL         3.77    1949
Commonsense Reasoning              WinoGrande          Accuracy    57.2    1085
Language Modeling                  C4                  Perplexity  20.21   1071
Multi-task Language Understanding  MMLU                Accuracy    45.5    876
Question Answering                 ARC Easy            --          --      597
Language Modeling                  C4 (val)            PPL         6.82    514
Question Answering                 PIQA                Accuracy    68.7    374
Commonsense Reasoning              HellaSwag           Accuracy    59      350
Language Modeling                  Wiki2               PPL         6.86    149
Question Answering                 ARC Challenge       Accuracy    39.9    142

(Showing 10 of 11 rows)
