
SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models

About

Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, existing methods cannot maintain accuracy and hardware efficiency at the same time. We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs. Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by offline migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation. SmoothQuant enables an INT8 quantization of both weights and activations for all the matrix multiplications in LLMs, including OPT, BLOOM, GLM, MT-NLG, Llama-1/2, Falcon, Mistral, and Mixtral models. We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy. SmoothQuant enables serving 530B LLM within a single node. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs. Code is available at https://github.com/mit-han-lab/smoothquant.
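The core idea — migrating quantization difficulty from activations to weights with a mathematically equivalent per-channel scaling — can be sketched in a few lines. This is a minimal illustration, not the official implementation: the function name `smooth`, the shapes, and the migration-strength parameter `alpha` are assumptions for demonstration; the real code lives at the repository linked above.

```python
import numpy as np

def smooth(X, W, alpha=0.5):
    """Hypothetical sketch of SmoothQuant's smoothing step.

    X: activations, shape (tokens, channels); W: weights, shape (channels, out).
    Per-channel scale s_j = max|X[:, j]|^alpha / max|W[j, :]|^(1 - alpha)
    divides outlier activation channels down and multiplies the corresponding
    weight rows up, so the matmul result is unchanged:
        (X / s) @ (s[:, None] * W) == X @ W
    """
    s = (np.abs(X).max(axis=0) ** alpha) / (np.abs(W).max(axis=1) ** (1 - alpha))
    return X / s, W * s[:, None]

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
X[:, 3] *= 50.0                      # channel 3 is an activation outlier, typical of LLMs
W = rng.normal(size=(8, 16))

Xs, Ws = smooth(X, W, alpha=0.5)

# Mathematical equivalence: the product is preserved (up to float error).
assert np.allclose(X @ W, Xs @ Ws)
# The outlier is tamed: the smoothed activations have a smaller dynamic range,
# so INT8 quantization of Xs loses far less precision than quantizing X.
assert np.abs(Xs).max() < np.abs(X).max()
```

After smoothing, both `Xs` and `Ws` have moderate ranges, which is what makes W8A8 quantization of all matmuls feasible without retraining.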

Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, Song Han • 2022

Related benchmarks

| Task                              | Dataset           | Metric           | Result | Rank |
|-----------------------------------|-------------------|------------------|--------|------|
| Language Modeling                 | WikiText2         | Perplexity       | 5.18   | 1875 |
| Language Modeling                 | WikiText-2 (test) | PPL              | 8.6    | 1541 |
| Commonsense Reasoning             | HellaSwag         | Accuracy         | 79.21  | 1460 |
| Language Modeling                 | C4                | Perplexity       | 6.76   | 1182 |
| Visual Question Answering         | TextVQA           | Accuracy         | 70.4   | 1117 |
| Visual Question Answering         | VizWiz            | Accuracy         | 69.7   | 1043 |
| Multi-task Language Understanding | MMLU              | Accuracy         | 69.79  | 842  |
| Language Modeling                 | WikiText-2        | Perplexity (PPL) | 7.09   | 841  |
| Mathematical Reasoning            | GSM8K (test)      | Accuracy         | 60.2   | 797  |
| Language Modeling                 | PTB               | Perplexity       | 11.69  | 650  |

Showing 10 of 64 rows.
