
Leveraging KV Similarity for Online Structured Pruning in LLMs

About

Pruning has emerged as a promising direction for accelerating large language model (LLM) inference, yet existing approaches often suffer from instability because they rely on offline calibration data that may not generalize across inputs. In this work, we introduce Token Filtering, a lightweight online structured pruning technique that makes pruning decisions directly during inference without any calibration data. The key idea is to measure token redundancy via joint key-value similarity and skip redundant attention computations, thereby reducing inference cost while preserving critical information. To further enhance stability, we design a variance-aware fusion strategy that adaptively weights key and value similarity across heads, ensuring that informative tokens are retained even under high pruning ratios. This design introduces no additional memory overhead and provides a more reliable criterion for token importance. Extensive experiments on LLaMA-2 (7B/13B), LLaMA-3 (8B), and Mistral (7B) demonstrate that Token Filtering consistently outperforms prior structured pruning methods, preserving accuracy on commonsense reasoning benchmarks and maintaining strong performance on challenging tasks such as MMLU, even with 50% pruning.
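The scoring pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact similarity reference and the precise form of the variance-aware fusion are assumptions here (each token's key/value is compared to the per-head mean, and each head's key- and value-similarity scores are weighted by their variance, on the intuition that higher-variance heads discriminate tokens better). All function names are hypothetical.

```python
import numpy as np

def cosine_sim_to_mean(x):
    """Cosine similarity of each token to the per-head mean vector.
    x: (heads, seq, d) keys or values -> (heads, seq) scores.
    High similarity = the token is close to the average, i.e. redundant."""
    mean = x.mean(axis=1, keepdims=True)                          # (heads, 1, d)
    num = (x * mean).sum(axis=-1)
    den = np.linalg.norm(x, axis=-1) * np.linalg.norm(mean, axis=-1) + 1e-8
    return num / den

def token_filter(K, V, prune_ratio=0.5):
    """Return indices of tokens to keep, given keys K and values V
    of shape (heads, seq, d). prune_ratio is the fraction dropped."""
    sk = cosine_sim_to_mean(K)            # key redundancy per head/token
    sv = cosine_sim_to_mean(V)            # value redundancy per head/token
    # Variance-aware fusion (assumed form): heads whose scores spread out
    # more are weighted more heavily when combining key and value scores.
    wk = sk.var(axis=1, keepdims=True)
    wv = sv.var(axis=1, keepdims=True)
    fused = (wk * sk + wv * sv) / (wk + wv + 1e-8)                # (heads, seq)
    score = fused.mean(axis=0)                                    # (seq,)
    n_keep = max(1, int(round(K.shape[1] * (1.0 - prune_ratio))))
    # Keep the LEAST redundant tokens; their attention is computed as usual,
    # while the pruned tokens' attention computations are skipped.
    return np.sort(np.argsort(score)[:n_keep])

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 16, 64))      # 8 heads, 16 tokens, head dim 64
V = rng.standard_normal((8, 16, 64))
kept = token_filter(K, V, prune_ratio=0.5)
print(kept)                               # indices of the 8 retained tokens
```

Because the scores are computed from the K/V tensors the attention layer already holds, a criterion like this adds no extra memory and needs no offline calibration data.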

Jungmin Lee, Gwangeun Byeon, Yulhwa Kim, Seokin Hong • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multi-task Language Understanding | MMLU | - | 842 |
| Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score: 70.52 | 241 |
| Language Understanding | MMLU (0-shot) | Accuracy: 70.46 | 110 |
| Commonsense Reasoning | Commonsense Reasoning | Accuracy: 70.52 | 44 |
| Commonsense Reasoning | Commonsense Reasoning Suite (BoolQ, PIQA, HellaS, WinoG, ARC-e, ARC-c, OBQA) | Average Accuracy: 67.94 | 37 |
| Text Generation | Text Generation | PPL: 15.52 | 33 |
| Commonsense Reasoning | Commonsense Reasoning Suite (test) | Avg Accuracy: 0.6582 | 22 |
| Throughput Measurement | LLaMA-2 13B | Throughput: 19.4 tokens/s | 20 |
| Commonsense Reasoning | Commonsense Reasoning Benchmarks (zero-shot, LLaMA-2-13B) | BoolQ Accuracy (zero-shot): 80.18 | 17 |
| Language Modeling | Perplexity Evaluation (zero-shot) | PPL (zero-shot): 13.37 | 17 |

Showing 10 of 15 rows
