
OJBKQ: Objective-Joint Babai-Klein Quantization

About

Post-training quantization (PTQ) is widely used to compress large language models without retraining. However, many existing weight-only methods rely on heuristic objectives and greedy rounding, leading to noticeable degradation at low bit widths. In this work, we introduce OJBKQ (Objective-Joint Babai-Klein Quantization with K-Best Sampling), a layer-wise PTQ method that formulates weight quantization as a joint optimization problem over activations and weights. This formulation yields a multiple-right-hand-side box-constrained integer least squares (BILS) problem in each layer, which is NP-hard. For each column of the weight matrix, we apply an extended Babai nearest-plane algorithm and an extended version of Klein's randomized Babai algorithm to find the minimum-residual Babai-Klein point, a suboptimal solution to the BILS problem. Experimental results on large language models show that OJBKQ achieves lower perplexity at 3-4 bits than existing PTQ approaches, while maintaining comparable computational cost.
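The two per-column solvers named in the abstract can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: the function names, the discrete-Gaussian spread `s`, and the candidate count `K` are assumptions. Here `A` plays the role of the activation-derived matrix, `y` the target for one weight column, and `[lo, hi]` the integer quantization grid.

```python
import numpy as np

def babai_box(A, y, lo, hi):
    """Extended Babai nearest-plane point for the box-constrained integer
    least squares problem  min ||y - A x||  subject to lo <= x <= hi."""
    Q, R = np.linalg.qr(A)                 # A = Q R, R upper triangular
    c = Q.T @ y
    n = A.shape[1]
    x = np.zeros(n, dtype=int)
    for i in range(n - 1, -1, -1):         # back-substitute, round, clamp
        t = (c[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
        x[i] = int(np.clip(np.rint(t), lo[i], hi[i]))
    return x

def babai_klein_box(A, y, lo, hi, K=16, s=0.5, seed=None):
    """Minimum-residual point over the deterministic Babai solution and
    K Klein-style randomized samples: each coordinate is drawn from a
    discrete Gaussian centered at its back-substitution target, restricted
    to the box."""
    rng = np.random.default_rng(seed)
    Q, R = np.linalg.qr(A)
    c = Q.T @ y
    n = A.shape[1]
    best = babai_box(A, y, lo, hi)         # never worse than plain Babai
    best_res = np.linalg.norm(y - A @ best)
    for _ in range(K):
        x = np.zeros(n, dtype=int)
        for i in range(n - 1, -1, -1):
            t = (c[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
            z = np.arange(lo[i], hi[i] + 1)            # feasible integers
            e = -0.5 * ((z - t) * R[i, i] / s) ** 2    # Gaussian log-weights
            w = np.exp(e - e.max())                    # stabilized weights
            x[i] = rng.choice(z, p=w / w.sum())
        res = np.linalg.norm(y - A @ x)
        if res < best_res:
            best, best_res = x, res
    return best
```

Because the randomized search is seeded with the deterministic Babai point, the returned Babai-Klein point can only match or reduce the Babai residual; a full layer would apply this column by column to the multiple right-hand sides.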

Xinyu Wang, Ziyu Zhao, Peng Lu, Yu Gu, Xiao-Wen Chang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Arithmetic Reasoning | GSM8K | Accuracy | 87.87 | 155 |
| Code Generation | MBPP | Accuracy (%) | 63.2 | 146 |
| Common Sense Reasoning | Common Sense Reasoning Tasks (ARC-C, ARC-E, BoolQ, HellaSwag, PIQA, WinoGrande), zero-shot | Average Accuracy (Zero-Shot) | 70.66 | 72 |
| Scientific Reasoning | GPQA | Accuracy | 36.36 | 55 |
