
Rethinking Output Alignment For 1-bit Post-Training Quantization of Large Language Models

About

Large Language Models (LLMs) deliver strong performance across a wide range of NLP tasks, but their massive size hinders deployment on resource-constrained devices. To reduce their computational and memory burden, various compression techniques have been proposed, including quantization, pruning, and knowledge distillation. Among these, post-training quantization (PTQ) is widely adopted for its efficiency: it requires no retraining and only a small calibration dataset, enabling low-cost deployment. Recent advances in post-training quantization have demonstrated that even sub-4-bit methods can retain most of the original model's performance. However, 1-bit quantization, which converts floating-point weights to \(\pm\)1, remains particularly challenging, as existing 1-bit PTQ methods often suffer significant performance degradation compared to the full-precision models. Specifically, most existing 1-bit PTQ approaches focus on weight alignment, matching the full-precision model's weights with those of the quantized model, rather than directly aligning their outputs. Although the output-matching objective is more intuitive and better reflects the goal of quantization, naively applying it to 1-bit LLMs often leads to notable performance degradation. In this paper, we investigate why and under what conditions output matching fails in the context of 1-bit LLM quantization. Based on our findings, we propose a novel data-aware PTQ approach for 1-bit LLMs that explicitly accounts for activation error accumulation while keeping optimization efficient. Empirical experiments demonstrate that our solution consistently outperforms existing 1-bit PTQ methods with minimal overhead.
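To make the weight-alignment vs. output-alignment distinction concrete, the following is a minimal NumPy sketch (not the paper's method; all shapes and variable names are illustrative). Weights are binarized to \(\pm\)1 with a per-row scale: weight alignment picks the scale minimizing \(\|W - \alpha B\|_F^2\), while output alignment picks the scale minimizing \(\|WX - \alpha BX\|_F^2\) on calibration activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy full-precision weights and calibration activations
# (shapes are illustrative, not taken from the paper).
W = rng.normal(size=(8, 16))   # full-precision weight matrix
X = rng.normal(size=(16, 32))  # calibration activations

# 1-bit quantization: weights become +/-1 (with a per-row scale alpha).
B = np.sign(W)
B[B == 0] = 1.0

# Weight alignment: the alpha minimizing ||W - alpha*B||_F^2 per row
# is the mean absolute value of that row's weights.
alpha_w = np.abs(W).mean(axis=1, keepdims=True)

# Output alignment: the alpha minimizing ||W X - alpha*(B X)||_F^2 per row
# is a least-squares fit of the quantized output to the full-precision output.
Y, Yq = W @ X, B @ X
alpha_o = (Y * Yq).sum(axis=1, keepdims=True) / (Yq * Yq).sum(axis=1, keepdims=True)

# Compare output reconstruction error under the two scale choices.
err_w = np.linalg.norm(Y - alpha_w * Yq)  # output error with weight-aligned scale
err_o = np.linalg.norm(Y - alpha_o * Yq)  # output error with output-aligned scale
print(err_w, err_o)
```

On the calibration data itself, the output-aligned scale is never worse by construction (it is the per-row least-squares minimizer); the paper's point is that naively pushing this objective through a deep network lets activation errors accumulate across layers, which this single-layer sketch does not capture.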

Dung Anh Hoang, Cuong Pham, Cuong Nguyen, Trung Le, Jianfei Cai, Thanh-Toan Do • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Language Modeling | WikiText2 | Perplexity: 10.94 | 1875 |
| Language Modeling | C4 | Perplexity: 13.15 | 1182 |
| Language Modeling | WikiText-2 | – | 841 |
| Language Modeling | PTB | Perplexity: 16.75 | 650 |
| Zero-shot Question Answering | AveQA | Accuracy: 57.7 | 25 |
| Language Modeling | C4 | Perplexity (LLaMA-2 7B/8B): 19.25 | 6 |
| Question Answering | QA Benchmarks Zero-shot (BoolQ, Lambada, Piqa, OPQA, Winogrande, ARC-E, ARC-C, Hellaswag) | BoolQ Accuracy: 72.02 | 6 |
| Language Modeling | PTB | Perplexity (LLaMA-2 7/8B): 3.17e+3 | 6 |
