
Bit-by-Bit: Progressive QAT Strategy with Outlier Channel Splitting for Stable Low-Bit LLMs

About

Training LLMs at ultra-low precision remains a formidable challenge. Direct low-bit QAT often suffers from convergence instability and substantial training costs, exacerbated by quantization noise from heavy-tailed outlier channels and error accumulation across layers. To address these issues, we present Bit-by-Bit, a progressive QAT framework with outlier channel splitting. Our approach integrates three key components: (1) block-wise progressive training that reduces precision stage by stage, ensuring stable initialization for low-bit optimization; (2) a nested structure of integer quantization grids that enables a "train once, deploy any precision" paradigm, allowing a single model to support multiple bit-widths without retraining; and (3) rounding-aware outlier channel splitting, which mitigates quantization error while acting as an identity transform that preserves the quantized outputs. Furthermore, we adopt microscaling groups with E4M3 scales, capturing dynamic activation ranges in alignment with OCP/NVIDIA standards. To address the lack of efficient 2-bit kernels, we developed custom operators for both W2A2 and W2A16 configurations, achieving up to 11× speedup over BF16. Under W2A2 settings, Bit-by-Bit significantly outperforms baselines such as BitDistiller and EfficientQAT on both Llama 2 and Llama 3, with a WikiText2 perplexity gap of only 2.25 relative to the full-precision models.
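Two of the components above can be illustrated in a few lines of NumPy. This is a minimal sketch under stated assumptions, not the paper's implementation: `split_outlier_channel` shows plain outlier channel splitting (the rounding-aware refinement is not modeled), and `deploy_at` shows one plausible realization of nested integer grids via low-bit truncation; the function names, shapes, and nesting scheme are illustrative assumptions.

```python
import numpy as np

def split_outlier_channel(W, x, j):
    # Plain outlier channel splitting: duplicate input channel j and
    # halve both copies. An exact identity in full precision; the
    # paper's rounding-aware variant further preserves the *quantized*
    # outputs, which this sketch does not model.
    W_split = np.concatenate([W, W[:, j:j + 1]], axis=1)
    W_split[:, j] *= 0.5
    W_split[:, -1] *= 0.5
    x_split = np.concatenate([x, x[j:j + 1]])
    return W_split, x_split

def deploy_at(q, trained_bits, target_bits, scale):
    # Nested integer grids: dropping the low bits of the trained codes
    # lands on a coarser grid whose levels are a subset of the fine one
    # (one plausible "train once, deploy any precision" scheme; the
    # paper's exact nesting may differ).
    shift = trained_bits - target_bits
    return (q >> shift) * (scale * (1 << shift))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16)) * 0.1   # well-behaved channels
W[:, 3] = 5.0                        # one heavy-tailed outlier channel
x = rng.normal(size=16)

W_s, x_s = split_outlier_channel(W, x, 3)
print(np.allclose(W @ x, W_s @ x_s))        # True: outputs unchanged
print(np.abs(W_s).max() / np.abs(W).max())  # 0.5: dynamic range halved

q4 = np.array([-8, -5, 0, 3, 7])   # 4-bit integer codes from training
w2 = deploy_at(q4, 4, 2, 0.1)      # reuse them at 2 bits, no retraining
```

Halving the outlier channel shrinks the dynamic range the quantization grid must cover, so the same bit budget buys a finer step size for the remaining channels, while the full-precision matmul output is bit-for-bit identical.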

Binxing Xu, Hao Gu, Lujun Li, Hao Wang, Bei Liu, Jiacheng Liu, Qiyuan Zhu, Xintong Yang, Chao Li, Sirui Han, Yike Guo • 2026

Related benchmarks

Task | Dataset | Result | Rank
Language Modeling | WikiText2 | Perplexity 6.5 | 2839
Language Modeling | C4 | Perplexity 8.33 | 1071
Instruction Following | IFEval | IFEval Accuracy 30 | 625
Mathematical Reasoning | GSM8K | Accuracy 84 | 312
Mathematical Reasoning | MathQA | -- | 305
Language Understanding | MMLU | MMLU Accuracy 75 | 77
Zero-shot Evaluation | Zero-shot Tasks | Task Avg Score 73.51 | 10
Zero-shot Reasoning and Question Answering | Standard Downstream Tasks (PIQA, HellaSwag, Winogrande, ARC-Challenge, ARC-Easy) | PIQA Zero-Shot Accuracy 71.87 | 9
