QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs

About

We introduce QuaRot, a new Quantization scheme based on Rotations, which is able to quantize LLMs end-to-end, including all weights, activations, and KV cache in 4 bits. QuaRot rotates LLMs in a way that removes outliers from the hidden state without changing the output, making quantization easier. This computational invariance is applied to the hidden state (residual) of the LLM, as well as to the activations of the feed-forward components, aspects of the attention mechanism, and to the KV cache. The result is a quantized model where all matrix multiplications are performed in 4 bits, without any channels identified for retention in higher precision. Our 4-bit quantized LLaMa2-70B model has losses of at most 0.47 WikiText-2 perplexity and retains 99% of the zero-shot performance. We also show that QuaRot can provide lossless 6 and 8 bit LLaMa2 models without any calibration data using round-to-nearest quantization. Code is available at: https://github.com/spcl/QuaRot.
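The core mechanism is computational invariance: multiplying the hidden state by an orthogonal (rotation) matrix and folding the inverse rotation into the adjacent weight matrices leaves the network's output unchanged, while the rotated activations spread outlier energy across channels and become easier to quantize. Below is a minimal NumPy sketch of that idea, not the authors' implementation: QuaRot uses Hadamard-based rotations, whereas this toy example uses a random orthogonal matrix, and the names x, W_out, and Q are illustrative.

# Toy illustration of rotation-based computational invariance (assumed setup,
# not the QuaRot codebase).
import numpy as np

rng = np.random.default_rng(0)
d, n = 512, 16

# Hidden state with a few strong outlier channels.
x = rng.normal(size=(n, d))
x[:, :4] *= 50.0

# A linear layer that reads the hidden state.
W_out = rng.normal(size=(d, d)) / np.sqrt(d)

# Random orthogonal "rotation" Q (QuaRot uses Hadamard-based rotations instead).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

x_rot = x @ Q        # rotate the hidden state
W_rot = Q.T @ W_out  # fold the inverse rotation into the next layer's weights

# The layer output is unchanged (computational invariance), up to float error.
print(np.allclose(x @ W_out, x_rot @ W_rot))

# The rotated activations have a much smaller dynamic range, which is what
# makes low-bit quantization of the hidden state easier.
print(np.abs(x).max(), np.abs(x_rot).max())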

Saleh Ashkboos, Amirkeivan Mohtashami, Maximilian L. Croci, Bo Li, Pashmina Cameron, Martin Jaggi, Dan Alistarh, Torsten Hoefler, James Hensman • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText2 | Perplexity | 3.41 | 2839
Language Modeling | WikiText-2 (test) | PPL | 3.61 | 1949
Language Modeling | WikiText-2 | Perplexity (PPL) | 3.79 | 1624
Language Modeling | C4 | Perplexity | 6.12 | 1422
Commonsense Reasoning | WinoGrande | Accuracy | 71.9 | 1085
Language Modeling | C4 | Perplexity | 10.59 | 1071
Code Generation | HumanEval | -- | -- | 1036
Language Modeling | PTB | Perplexity | 36.1 | 1034
Multi-task Language Understanding | MMLU | Accuracy | 63.3 | 876
Multimodal Understanding | MMBench | Accuracy | 72.51 | 637

Showing 10 of 96 rows

Other info

Code: https://github.com/spcl/QuaRot
