
UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs

About

Deploying large language models (LLMs) on mobile platforms is challenging because devices have limited memory and shared computational resources. Resource availability also fluctuates with the current device workload, adding uncertainty to model deployment. We introduce UniQL, a unified post-training quantization and low-rank compression framework with on-device configurable pruning rates for edge LLMs. UniQL is a general framework that integrates quantization and low-rank compression for Transformers, State Space Models (SSMs), and hybrid models to support diverse edge applications. In our proposed joint framework, we introduce an efficient structured weight-sorting method that speeds up computation by 20x, quantization-aware singular value decomposition (SVD) to minimize quantization errors, state-aware weight sorting for SSMs, and a fused rotary positional embedding (RoPE) kernel for pruned models. Our framework performs weight-sorting, fine-tuning, and quantization in the cloud in a single-pass workflow, while enabling on-device configurable pruning rates up to 35%. Our experiments show that quantized and pruned models achieve a memory reduction of 4x-5.7x and a token-throughput improvement of 2.7x-3.4x, maintaining accuracy within 5% of the original models at 15% pruning across Transformers (Llama3 and Qwen2.5), SSMs (Mamba2), and hybrid models (Nemotron-H and Bamba-v2). The code and quantized models are available at: https://github.com/enyac-group/UniQL.
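To make the "quantization plus low-rank compression" idea concrete, here is a minimal sketch of one common way to combine the two: approximate a weight matrix with a low-rank SVD factor and uniformly quantize the residual. The function name, the 4-bit setting, and the decomposition order are illustrative assumptions, not UniQL's exact algorithm (which additionally uses quantization-aware SVD and structured weight sorting).

```python
import numpy as np

def lowrank_plus_quant(W, rank, n_bits=4):
    """Hedged sketch: W ~ L + dequant(q), where L is a rank-`rank`
    SVD approximation and q is a symmetric uniform quantization of
    the residual. Illustrative only, not UniQL's exact method."""
    # Low-rank part from truncated SVD.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * S[:rank]) @ Vt[:rank]
    # Quantize the residual with a single per-tensor scale.
    R = W - L
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(R).max() / qmax
    q = np.clip(np.round(R / scale), -qmax - 1, qmax).astype(np.int8)
    return L, q, scale

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)).astype(np.float32)
L, q, scale = lowrank_plus_quant(W, rank=8)
W_hat = L + q.astype(np.float32) * scale
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

The low-rank factor captures the dominant structure in full precision, so the residual that gets quantized has a much smaller dynamic range than the raw weights, which is why pairing the two typically beats either technique alone at the same memory budget.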

Hung-Yueh Chiang, Chi-Chih Chang, Yu-Chen Lu, Chien-Yu Lin, Kai-Chiang Wu, Mohamed S. Abdelfattah, Diana Marculescu • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Zero-shot Evaluation | Downstream Tasks Zero-shot | Accuracy: 75.1 | 278 |
| Language Understanding | MMLU 5-shot (test) | Accuracy: 70.3 | 149 |
| Zero-shot Evaluation | ArcC, ArcE, HS, PiQA, WG (test val) | Average Accuracy: 73 | 61 |
| Zero-shot Downstream Task Evaluation | LM-EVAL (average of HellaSwag, PIQA, ARC-Easy, ARC-Challenge, and WinoGrande), zero-shot, latest | Average Accuracy: 74.9 | 30 |
| Language Modeling Accuracy | LLM Evaluation Benchmarks, zero-shot | Llama-2 7B Accuracy: 67.6 | 9 |
| Model Compression | Llama 3.1 8B | Model Size (GB): 2.8 | 7 |
| Model Compression | Qwen 2.5 7B | Model Size (GB): 2.7 | 7 |
| Model Compression | Llama 3.1 8B | Compression Time: 19 | 7 |
| Coding Tasks | MBPP+ instruct, latest (test) | Accuracy: 64.8 | 6 |
| Commonsense Reasoning | ARC-e, ARC-c, PIQA, WinoG., HellaS. | ARC-e Accuracy: 76.05 | 6 |

Showing 10 of 18 rows.
