
BiLLM: Pushing the Limit of Post-Training Quantization for LLMs

About

Pretrained large language models (LLMs) exhibit exceptional general language processing capabilities but come with significant demands on memory and computational resources. As a powerful compression technique, binarization can reduce model weights to a mere 1 bit, dramatically lowering computation and memory requirements. However, existing quantization techniques fall short of maintaining LLM performance under such ultra-low bit-widths. In response to this challenge, we present BiLLM, a groundbreaking 1-bit post-training quantization scheme tailored for pretrained LLMs. Based on the weight distribution of LLMs, BiLLM first identifies and structurally selects salient weights, and minimizes the compression loss through an effective binary residual approximation strategy. Moreover, considering the bell-shaped distribution of the non-salient weights, we propose an optimal splitting search to group and binarize them accurately. BiLLM achieves, for the first time, high-accuracy inference (e.g., 8.41 perplexity on LLaMA2-70B) with only 1.08-bit weights across various LLM families and evaluation metrics, outperforming SOTA quantization methods for LLMs by significant margins. Moreover, BiLLM can binarize an LLM with 7 billion weights within 0.5 hours on a single GPU, demonstrating satisfactory time efficiency. Our code is available at https://github.com/Aaronhuang-778/BiLLM.
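The two core ideas in the abstract, binary residual approximation for salient weights and an optimal splitting search for the bell-shaped non-salient weights, can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the official BiLLM implementation: function names and the brute-force threshold search are illustrative assumptions.

```python
import numpy as np

def binarize(w):
    # Optimal 1-bit approximation a * sign(w), where the scale
    # a = mean(|w|) minimizes the L2 reconstruction error.
    a = np.mean(np.abs(w)) if w.size else 0.0
    return a * np.sign(w)

def residual_binarize(w):
    # Binary residual approximation (sketch): binarize the weights,
    # then binarize the remaining residual and sum the two terms.
    b1 = binarize(w)
    b2 = binarize(w - b1)
    return b1 + b2

def split_binarize(w, n_candidates=32):
    # Splitting search (sketch): try magnitude thresholds p, split the
    # weights into a "concentrated" group (|w| <= p) and a "sparse" group
    # (|w| > p), binarize each group with its own scale, and keep the
    # split with the lowest reconstruction error.
    best_err, best_approx = np.inf, None
    for p in np.linspace(0.0, np.abs(w).max(), n_candidates + 1)[1:]:
        mask = np.abs(w) <= p
        approx = np.empty_like(w)
        approx[mask] = binarize(w[mask])
        approx[~mask] = binarize(w[~mask])
        err = np.sum((w - approx) ** 2)
        if err < best_err:
            best_err, best_approx = err, approx
    return best_approx
```

On a bell-shaped weight sample, both refinements reduce the reconstruction error relative to plain one-scale binarization, which is the intuition behind applying them to salient and non-salient weights respectively.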

Wei Huang, Yangdong Liu, Haotong Qin, Ying Li, Shiming Zhang, Xianglong Liu, Michele Magno, Xiaojuan Qi• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText2 | Perplexity | 13.86 | 2839 |
| Language Modeling | WikiText-2 (test) | PPL | 8.37 | 1949 |
| Commonsense Reasoning | HellaSwag | Accuracy | 37.5 | 1891 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 21.53 | 1624 |
| Language Modeling | C4 | Perplexity | 9.26 | 1422 |
| Commonsense Reasoning | WinoGrande | Accuracy | 53.6 | 1085 |
| Language Modeling | PTB | Perplexity | 21.41 | 1034 |
| Question Answering | ARC Challenge | Accuracy | 25.1 | 906 |
| Commonsense Reasoning | PIQA | Accuracy | 58.2 | 751 |
| Language Modeling | WikiText | PPL | 28.8 | 732 |

(Showing 10 of 37 rows.)
