SemantiCache: Efficient KV Cache Compression via Semantic Chunking and Clustered Merging

About

Existing KV cache compression methods generally operate on discrete tokens or non-semantic chunks. Such approaches often cause semantic fragmentation: linguistically coherent units are disrupted, leading to irreversible information loss and degraded model performance. To address this, we introduce SemantiCache, a novel compression framework that preserves semantic integrity by aligning the compression process with the hierarchical semantic structure of language. Specifically, we first partition the cache into semantically coherent chunks at delimiters, which serve as natural semantic boundaries. Within each chunk, a computationally efficient Greedy Seed-Based Clustering (GSC) algorithm groups tokens into semantic clusters. These clusters are then merged into semantic cores, aided by a Proportional Attention mechanism that rebalances the reduced attention contributions of the merged tokens. Extensive experiments across diverse benchmarks and models demonstrate that SemantiCache accelerates the decoding stage of inference by up to 2.61x and substantially reduces memory footprint, while maintaining performance comparable to the original model.
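The abstract names the pipeline (greedy seed-based clustering within a chunk, merging clusters into semantic cores, proportional attention to rebalance merged tokens) but gives no implementation details. Below is a minimal, hypothetical sketch of how such a pipeline could look, assuming cosine similarity over key vectors, first-unassigned-token seeding, mean-pooled cores, and a log-cluster-size logit bonus for the rebalancing step; all function names, the threshold, and these specific choices are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def greedy_seed_clustering(keys, sim_threshold=0.8):
    """Hypothetical GSC sketch: greedily take the first unassigned token
    as a seed and group every remaining token whose cosine similarity to
    the seed meets `sim_threshold`. Returns clusters of token indices."""
    # Normalize so that dot products equal cosine similarities.
    normed = keys / (np.linalg.norm(keys, axis=1, keepdims=True) + 1e-8)
    unassigned = list(range(len(keys)))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        sims = normed[unassigned] @ normed[seed]
        members = [seed] + [int(unassigned[i])
                            for i in np.flatnonzero(sims >= sim_threshold)]
        unassigned = [t for t in unassigned if t not in members]
        clusters.append(members)
    return clusters

def merge_to_cores(keys, clusters):
    """Mean-pool each cluster's key vectors into one 'semantic core'."""
    return np.stack([keys[c].mean(axis=0) for c in clusters])

def proportional_attention(q, core_keys, sizes):
    """Rebalance a merged core's attention by adding log(cluster size)
    to its logit (a common merging trick, assumed here). Returns the
    softmax attention weights over the cores for a single query."""
    d = q.shape[-1]
    logits = core_keys @ q / np.sqrt(d) + np.log(sizes)
    w = np.exp(logits - logits.max())
    return w / w.sum()
```

On toy 2-D keys with two clearly separated directions, the greedy pass yields two clusters, and a core representing more merged tokens receives proportionally more attention weight than an otherwise identical singleton core.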

Shunlong Wu, Hai Lin, Shaoshen Chen, Tingwei Lu, Yongqin Zeng, Shaoxiong Zhan, Hai-Tao Zheng, Hong-Gee Kim• 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Long-context language modeling | LongBench | Average Score: 40.87 | 164 |
| Key Information Retrieval | Needle-in-a-Haystack (32K context) | Accuracy: 91.15 | 19 |
| Retrieval | Needle-in-a-Haystack (L=8k) | Accuracy: 94.38 | 18 |
| Inference Efficiency | 32k context length efficiency, Llama-3-8B (test) | Time To First Token (s): 4.25 | 7 |
