
LLM-QAT: Data-Free Quantization Aware Training for Large Language Models

About

Several post-training quantization methods have been applied to large language models (LLMs) and have been shown to perform well down to 8 bits. We find that these methods break down at lower bit precision and investigate quantization-aware training for LLMs (LLM-QAT) to push quantization levels even further. We propose a data-free distillation method that leverages generations produced by the pre-trained model, which better preserves the original output distribution and allows quantizing any generative model independent of its training data, similar to post-training quantization methods. In addition to quantizing weights and activations, we also quantize the KV cache, which is critical for increasing throughput and supporting long sequence dependencies at current model sizes. We experiment with LLaMA models of sizes 7B, 13B, and 30B, at quantization levels down to 4 bits. We observe large improvements over training-free methods, especially in the low-bit settings.
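For context on what "quantization-aware training" means here: the forward pass runs on "fake-quantized" values, i.e. floats snapped to a low-bit grid, so the model learns to tolerate quantization error. Below is a minimal, self-contained sketch of symmetric MinMax fake quantization, a common scheme in QAT pipelines; the function name and details are illustrative, not the authors' code:

```python
def fake_quantize_symmetric(values, num_bits=4):
    """MinMax symmetric fake quantization: snap floats to a signed
    num_bits grid and map them back, keeping the result in floating
    point so it can feed a normal training forward pass."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 7 for 4-bit signed
    amax = max(abs(v) for v in values)      # MinMax range estimate
    if amax == 0:
        return list(values)
    scale = amax / qmax
    # Round to the nearest grid point, clamp to the representable
    # signed range, then dequantize back to float.
    return [max(-qmax - 1, min(qmax, round(v / scale))) * scale
            for v in values]

weights = [0.5, -1.0, 0.25, 2.0]
quantized = fake_quantize_symmetric(weights, num_bits=4)
```

At 4 bits the per-element error is bounded by half a grid step (scale / 2), which is why accuracy degrades gracefully under QAT but sharply for post-training methods at the same precision.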

Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, Vikas Chandra • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText2 | Perplexity | 6.02 | 1875 |
| Language Modeling | WikiText-2 (test) | PPL | 5.48 | 1541 |
| Language Modeling | C4 | Perplexity | 6.67 | 1182 |
| Multi-task Language Understanding | MMLU | -- | -- | 842 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 5.12 | 841 |
| Language Modeling | C4 (val) | PPL | 6.3 | 392 |
| Language Modeling | WikiText2 (val) | Perplexity (PPL) | 7.3 | 277 |
| Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score | 70.8 | 241 |
| Language Understanding | MMLU (test) | MMLU Average Accuracy | 58.5 | 136 |
| Question Answering | TriviaQA (test) | Accuracy | 70 | 121 |

(Showing 10 of 19 rows)
