Universality of Layer-Level Entropy-Weighted Quantization Beyond Model Architecture and Size

About

We present Entropy-Weighted Quantization (EWQ), a novel approach to selective model quantization that transcends the limitations of architecture-specific and size-dependent compression methods for Large Language Models (LLMs). By analyzing the entropy distribution across transformer blocks, EWQ determines which blocks can be safely quantized without significant performance degradation, independent of model architecture or size. Our method outperforms uniform quantization approaches, maintaining Massive Multitask Language Understanding (MMLU) accuracy scores within 0.5% of unquantized models while reducing memory usage by up to 18%. We demonstrate the effectiveness of EWQ across multiple architectures (from 1.6B to 70B parameters) and show consistent improvements in the quality-compression trade-off regardless of model scale or architectural design. A surprising finding is that EWQ can reduce perplexity relative to the unquantized model, suggesting beneficial regularization through selective precision reduction. This improvement holds across different model families, indicating a fundamental relationship between layer-level entropy and optimal precision requirements. Additionally, we introduce FastEWQ, a rapid method for entropy distribution analysis that eliminates the need to load model weights. This technique leverages universal characteristics of the entropy distribution that persist across architectures and scales, enabling near-instantaneous quantization decisions while maintaining 80% classification accuracy relative to full entropy analysis. Our results demonstrate that effective quantization strategies can be developed independently of specific architectural choices or model sizes, opening new possibilities for efficient LLM deployment.
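To make the core idea concrete, here is a minimal sketch of layer-level entropy analysis in the spirit the abstract describes. The histogram-based entropy estimator, the `keep_fraction` threshold rule, and all names are illustrative assumptions, not the paper's published criterion.

```python
import numpy as np

def weight_entropy(weights: np.ndarray, n_bins: int = 256) -> float:
    """Histogram estimate of the Shannon entropy (in bits) of a block's weights."""
    hist, _ = np.histogram(weights.ravel(), bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def select_blocks_to_quantize(blocks, keep_fraction=0.25):
    """Rank transformer blocks by weight entropy and return quantization candidates.

    Illustrative rule (an assumption, not the paper's exact criterion): keep the
    highest-entropy fraction of blocks in full precision and quantize the rest.
    """
    scores = {name: weight_entropy(w) for name, w in blocks.items()}
    ranked = sorted(scores, key=scores.get)           # ascending entropy
    n_keep = max(1, int(round(len(ranked) * keep_fraction)))
    return ranked[:-n_keep]                           # low-entropy blocks

# Toy usage with random stand-in "blocks":
rng = np.random.default_rng(0)
blocks = {f"block_{i}": rng.normal(scale=1.0 + 0.1 * i, size=4096) for i in range(8)}
print(select_blocks_to_quantize(blocks))
```

The abstract's "80% classification accuracy" suggests FastEWQ frames the per-block decision as a classification problem over metadata available without loading weights. A minimal sketch under that assumption, with a scikit-learn random forest and made-up features:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical FastEWQ-style classifier. The features (e.g., relative block
# depth, log parameter count, hidden size) are illustrative assumptions,
# not the paper's published feature set.
def train_fastewq_classifier(block_features, quantize_labels):
    """block_features: one row of cheap metadata per block (no weights loaded);
    quantize_labels: 1 where full entropy analysis marked the block safe to quantize."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(block_features, quantize_labels)
    return clf
```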

Alireza Behtash, Marijan Fofonjka, Ethan Baird, Tyler Mauer, Hossein Moghimifam, David Stout, Joel Dennison • 2025

Related benchmarks

Task                   Dataset                    Metric      Result  Rank
Language Modeling      C4                         Perplexity  8.43    1071
Commonsense Reasoning  PIQA                       Accuracy    72.1    751
Commonsense Reasoning  HellaSwag                  Accuracy    75.11   213
Commonsense Reasoning  BoolQ                      Accuracy    75.69   212
Commonsense Reasoning  WinoGrande                 Accuracy    73.43   189
Language Modeling      WikiText2                  Perplexity  6.77    162
Reasoning              PIQA                       Accuracy    75.38   145
Reasoning              ARC-C                      Accuracy    55.03   80
Commonsense Reasoning  TruthfulQA                 Accuracy    26.95   28
Language Modeling      Language Modeling Average  Perplexity  7.61    12

(Showing 10 of 12 rows.)
