
HBLLM: Wavelet-Enhanced High-Fidelity 1-Bit Quantization for LLMs

About

We introduce HBLLM, a wavelet-enhanced high-fidelity $1$-bit post-training quantization method for Large Language Models (LLMs). By leveraging Haar wavelet transforms to enhance expressive capacity through frequency decomposition, HBLLM significantly improves quantization fidelity while maintaining minimal overhead. The approach features two structure-aware grouping strategies: (1) frequency-aware multi-parameter intra-row grouping and (2) $\ell_2$-norm-based saliency-driven column selection. For non-salient weights, a shared mean is employed across quantization groups within each frequency band to optimize storage efficiency. Experiments on the OPT and LLaMA model families demonstrate that HBLLM achieves state-of-the-art performance in $1$-bit quantization, attaining a perplexity of $6.71$ on LLaMA$2$-$13$B with an average weight storage of only $1.08$ bits. Code is available at: https://github.com/Yeyke/HBLLM.
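The official implementation is in the repository linked above; this page only gives the abstract. As a rough, non-authoritative sketch of the ingredients the abstract names (a per-row Haar transform, sign-based $1$-bit groups with a per-group scale, an $\ell_2$-norm column-saliency rule, and a mean shared across the groups of each frequency band), a NumPy toy might look like the following. All function names, the group size, and the saliency fraction are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def haar_decompose(row):
    """Single-level Haar transform of a 1-D weight row: returns the
    low-frequency (approximation) and high-frequency (detail) bands."""
    even, odd = row[0::2], row[1::2]
    return (even + odd) / np.sqrt(2.0), (even - odd) / np.sqrt(2.0)

def haar_reconstruct(approx, detail):
    """Inverse of haar_decompose (exact for even-length rows)."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    row = np.empty(even.size + odd.size)
    row[0::2], row[1::2] = even, odd
    return row

def select_salient_columns(W, frac=0.05):
    """Rank columns by their l2 norm; the top `frac` are kept
    unquantized. The 5% fraction is an arbitrary illustrative choice."""
    k = max(1, int(frac * W.shape[1]))
    return np.argsort(np.linalg.norm(W, axis=0))[-k:]

def binarize_band(band, group_size=64):
    """1-bit quantize one frequency band of one row.

    Each intra-row group stores sign bits plus its own scale, while the
    mean is stored once and shared by every group in the band -- one
    plausible reading of the abstract's shared-mean storage trick.
    `group_size=64` is illustrative, not a value from the paper."""
    mu = band.mean()                           # single shared mean per band
    out = np.empty_like(band)
    for start in range(0, band.size, group_size):
        seg = band[start:start + group_size] - mu
        alpha = np.abs(seg).mean()             # per-group scale
        out[start:start + group_size] = alpha * np.sign(seg) + mu
    return out

def quantize_row(row, group_size=64):
    """Binarize a row band-by-band in the Haar wavelet domain."""
    approx, detail = haar_decompose(row)
    return haar_reconstruct(binarize_band(approx, group_size),
                            binarize_band(detail, group_size))

# Toy usage: binarize a random weight matrix, sparing the salient columns.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 256))
salient = select_salient_columns(W)
W_q = np.stack([quantize_row(r) for r in W])
W_q[:, salient] = W[:, salient]    # salient columns stay full precision
print("reconstruction MSE:", float(np.mean((W - W_q) ** 2)))
```

The Haar averages and differences are what create the two frequency bands: the smooth approximation band and the oscillatory detail band have very different statistics, so giving each its own groups, scales, and one shared mean adds little storage over a plain sign-plus-scale code while plausibly recovering much of the lost fidelity.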

Ningning Chen, Weicai Ye, Ying Jiang • 2025

Related benchmarks

Task                  | Dataset                                                                        | Result                             | Rank
Language Modeling     | WikiText2                                                                      | Perplexity 8.82                    | 1875
Language Modeling     | C4                                                                             | Perplexity 6.18                    | 1182
Language Modeling     | PTB                                                                            | Perplexity 88.86                   | 650
Commonsense Reasoning | Commonsense Reasoning Suite (BoolQ, PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c) | BoolQ Accuracy 68.53               | 28
Robotic Manipulation  | SIMPLER Google Robot VA                                                        | Pick Up Coke Can Success Rate 79.3 | 20
Robotic Manipulation  | SIMPLER Visual Matching                                                        | Pick Coke Success 80.7             | 12
Question Answering    | AvgQA                                                                          | AvgQA Score 70.01                  | 5
