
ZeroQuant-V2: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation

About

Post-training quantization (PTQ) has emerged as a promising technique for mitigating memory consumption and computational costs in large language models (LLMs). However, a systematic examination of various quantization schemes, model families, and quantization bit precisions has been absent from the literature. In this paper, we conduct a comprehensive analysis of these factors by investigating the effects of PTQ on weight-only, activation-only, and weight-and-activation quantization using diverse methods such as round-to-nearest (RTN), GPTQ, ZeroQuant, and their variants. We apply these methods to two distinct model families with parameters ranging from 125M to 176B. Our contributions include: (1) a sensitivity analysis revealing that activation quantization is generally more susceptible to error than weight quantization, with smaller models often outperforming larger models in terms of activation quantization; (2) an evaluation and comparison of existing PTQ methods to optimize model size reduction while minimizing the impact on accuracy, revealing that none of the current methods can achieve the original model quality for quantization with either INT4-weight or INT4-weight-and-INT8-activation; (3) based on these insights, an optimized method called Low-Rank Compensation (LoRC), which employs low-rank matrices to enhance model quality recovery with a minimal increase in model size.

Zhewei Yao, Xiaoxia Wu, Cheng Li, Stephen Youn, Yuxiong He • 2023
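To make the abstract's ideas concrete, below is a minimal NumPy sketch of two techniques it names: round-to-nearest (RTN) weight quantization and low-rank compensation of the quantization error W − Ŵ via truncated SVD, in the spirit of LoRC. The per-row symmetric scaling, the rank of 8, and all function names are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch only: RTN quantization plus LoRC-style low-rank
# compensation of the quantization error. Scaling scheme, rank, and function
# names are assumptions, not the paper's exact recipe.
import numpy as np

def rtn_quantize(w: np.ndarray, num_bits: int = 4) -> np.ndarray:
    """Symmetric per-row round-to-nearest quantization; returns dequantized weights."""
    qmax = 2 ** (num_bits - 1) - 1                     # e.g. 7 for INT4
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)  # snap to the integer grid
    return q * scale                                   # dequantize back to float

def low_rank_compensation(w: np.ndarray, w_hat: np.ndarray, rank: int = 8):
    """Low-rank factors (U_r, V_r) approximating the quantization error W - W_hat."""
    err = w - w_hat
    u, s, vt = np.linalg.svd(err, full_matrices=False)
    u_r = u[:, :rank] * s[:rank]                       # fold singular values into U
    v_r = vt[:rank, :]
    return u_r, v_r                                    # W ~= W_hat + U_r @ V_r

rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512)).astype(np.float32)
w_hat = rtn_quantize(w, num_bits=4)
u_r, v_r = low_rank_compensation(w, w_hat, rank=8)
print("RTN error (Frobenius):       ", np.linalg.norm(w - w_hat))
print("After compensation:          ", np.linalg.norm(w - (w_hat + u_r @ v_r)))
```

In this sketch the factors add only 2 × rank × d extra parameters per d × d weight matrix, small relative to the d² quantized weights, which is consistent with the "minimal increase in model size" claimed in the abstract.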

Related benchmarks

Task | Dataset | Metric | Result | Rank
---- | ------- | ------ | ------ | ----
Language Modeling | WikiText-2 (test) | Perplexity | 4.99 | 1949
Language Modeling | C4 | Perplexity | 7.86 | 1071
Question Answering | ARC Challenge | Accuracy | 46.67 | 906
Question Answering | ARC Easy | -- | -- | 597
Question Answering | PIQA | Accuracy | 77.67 | 374
Question Answering | BoolQ | -- | -- | 317
Sentence Completion | HellaSwag | Accuracy | 66.33 | 276
Word Prediction | LAMBADA | Accuracy | 74 | 148
Visual Question Answering | ScienceQA (test) | Accuracy | 89.13 | 113
Pronoun Resolution | WinoGrande | Accuracy | 75 | 41

Showing 10 of 14 rows.
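For reference, the perplexity numbers above follow the usual convention: the exponential of the mean negative log-likelihood per token, with lower being better. The snippet below illustrates the computation; the token log-probabilities are made up for the example.

```python
# Hedged illustration of the standard perplexity formula: PPL = exp(mean NLL).
# The log-probabilities here are invented, not taken from the benchmarks above.
import math

token_log_probs = [-2.1, -0.4, -1.7, -0.9]            # log p(token_i | context)
nll = -sum(token_log_probs) / len(token_log_probs)    # mean negative log-likelihood
print(f"perplexity = {math.exp(nll):.2f}")            # lower is better
```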
