
LoaQ: Layer-wise Output Approximation Quantization

About

A natural and intuitive idea in model quantization is to make each component's quantized output match its original output. Motivated by this idea, most layer-wise post-training quantization (PTQ) methods focus on weight approximation at the linear-layer level. However, this local objective often yields insufficient approximations and deviates in practice from the guiding intuition. Recent work has improved the approximation of linear-layer outputs within the layer-wise PTQ framework, but such refinements remain inadequate for achieving alignment with the full-model output. Based on a deeper understanding of the structure of mainstream LLMs, we propose LoaQ, which incorporates output-matching factors when quantizing linear layers within the layer-wise PTQ framework. LoaQ aligns better with this intuition and admits a simple closed-form solution, making it orthogonal to existing techniques and easy to integrate into existing quantization pipelines. Experiments on the LLaMA and Qwen model families demonstrate that LoaQ is effective in both weight-only and weight-activation quantization. By integrating seamlessly with existing quantization strategies, it further improves overall quantization quality and shows strong potential to advance the frontier of post-training quantization.
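The abstract does not give LoaQ's exact formulation, but the general idea it builds on (quantize a linear layer, then apply a closed-form correction so the quantized layer's output matches the original output on calibration data) can be illustrated with a minimal sketch. Everything below is a simplified assumption for illustration: round-to-nearest per-column quantization plus a least-squares per-output-column rescaling, not the authors' actual method.

```python
import numpy as np

def quantize_rtn(w, n_bits=4):
    # Per-column symmetric round-to-nearest quantization (toy baseline).
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max(axis=0) / qmax
    scale = np.where(scale == 0, 1.0, scale)
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

def output_matched_quantize(w, x, n_bits=4):
    # Quantize W, then solve a closed-form least-squares rescaling so the
    # quantized layer output X @ Wq best matches the original X @ W.
    wq = quantize_rtn(w, n_bits)
    y, yq = x @ w, x @ wq
    # Optimal per-output-column factor: argmin_a ||y_j - a * yq_j||^2.
    num = (y * yq).sum(axis=0)
    den = (yq * yq).sum(axis=0)
    alpha = np.where(den == 0, 1.0, num / den)
    return wq * alpha  # broadcast scale over output columns

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 64))   # calibration activations (assumed)
w = rng.standard_normal((64, 32))    # toy linear-layer weights

err_plain = np.linalg.norm(x @ w - x @ quantize_rtn(w))
err_matched = np.linalg.norm(x @ w - x @ output_matched_quantize(w, x))
```

Because the identity factor is always a feasible choice, the output-matched variant never increases the layer-output error relative to plain rounding; this is the sense in which an output-level objective with a closed-form solution can be layered on top of an existing weight quantizer.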

Li Lin, Xiaojun Wan • 2025

Related benchmarks

Task                 Dataset                  Metric             Result   Rank
Language Modeling    WikiText-2               Perplexity (PPL)   4.463    841
Question Answering   ARC-E and PIQA (test)    Accuracy           78.17    95
