
Post-Training Quantization of OpenPangu Models for Efficient Deployment on Atlas A2

About

Huawei's openPangu-Embedded-1B and openPangu-Embedded-7B are variants of the openPangu large language model, designed for efficient deployment on Ascend NPUs. The 7B variant supports three distinct Chain-of-Thought (CoT) reasoning paradigms, namely slow_think, auto_think, and no_think, while the 1B variant operates exclusively in the no_think mode, which employs condensed reasoning for higher efficiency. Although CoT reasoning enhances capability, the generation of extended reasoning traces introduces substantial memory and latency overheads, posing challenges for practical deployment on Ascend NPUs. This paper addresses these computational constraints by leveraging low-bit quantization, which transforms FP16 computations into more efficient integer arithmetic. We introduce a unified low-bit inference framework, supporting INT8 (W8A8) and W4A8 quantization, specifically optimized for openPangu-Embedded models on the Atlas A2. Our comprehensive evaluation on code generation benchmarks (HumanEval and MBPP) demonstrates the efficacy of this approach. INT8 quantization consistently preserves over 90% of the FP16 baseline accuracy and achieves a 1.5x prefill speedup on the Atlas A2. Furthermore, W4A8 quantization significantly reduces memory consumption, albeit with a moderate trade-off in accuracy. These findings collectively indicate that low-bit quantization effectively facilitates efficient CoT reasoning on Ascend NPUs, maintaining high model fidelity.
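To make the core idea concrete, the following is a minimal sketch of per-tensor symmetric INT8 quantization, the building block behind W8A8 schemes like the one described above. This is an illustrative NumPy example, not the paper's actual implementation; all function names and the choice of per-tensor symmetric scaling are assumptions for demonstration.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Per-tensor symmetric quantization: map float values to int8 in [-127, 127]."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from int8 codes and the scale."""
    return q.astype(np.float32) * scale

# Example: quantize a small weight vector and measure the round-trip error.
x = np.array([0.5, -1.2, 3.4, 0.0], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize_int8(q, s)
# The worst-case round-trip error is bounded by half the scale step.
assert np.max(np.abs(x - x_hat)) <= s / 2 + 1e-6
```

In a real W8A8 pipeline, both weights and activations get int8 codes so that matrix multiplications run in integer arithmetic on the NPU, with dequantization folded into the output scaling; W4A8 additionally packs weights into 4-bit codes for further memory savings at some accuracy cost, as the abstract notes.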

Yilun Luo, Huaqing Zheng, Haoqian Meng, Wenyuan Liu, Peng Zhang• 2025

Related benchmarks

Task                  Dataset                                   Result                      Rank
Code Generation       HumanEval                                 --                          850
Code Generation       MBPP                                      Accuracy (%): 83.27         146
Inference Efficiency  openPangu Embedded Efficiency Benchmark   Prefill Latency (ms): 528   10
