StructKV: Preserving the Structural Skeleton for Scalable Long-Context Inference
About
As Large Language Models (LLMs) scale to context windows exceeding one million tokens, the linear growth of the Key-Value (KV) cache imposes severe memory capacity and bandwidth bottlenecks, constraining the efficiency of long-context inference. Existing compression approaches typically prioritize tokens using local saliency metrics measured at a single layer, and therefore systematically discard tokens that act as global information hubs across the network depth but appear temporarily dormant at the layer selected for pruning. To address this limitation, we propose StructKV, a structure-aware KV cache compression framework with three core innovations. First, Global In-Degree Centrality aggregates attention patterns across the network depth to identify global information hubs. Second, Dynamic Pivot Detection uses information-theoretic metrics to adaptively locate the optimal layer for compression. Third, Structural Propagation and Decoupling separates the computational budget from the memory storage budget. Experimental results on the LongBench and RULER benchmarks demonstrate that StructKV effectively preserves long-range dependencies and retrieval robustness.
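The abstract does not include an implementation, but the Global In-Degree Centrality idea can be sketched as follows: treat each attention matrix as a directed graph, score each token by the total attention mass flowing into it (its in-degree), aggregate those scores across layers, and retain only the top-scoring tokens in the KV cache. The function name, tensor shapes, and `keep_ratio` parameter below are illustrative assumptions, not the paper's actual API:

```python
import numpy as np

def global_in_degree_centrality(attn_per_layer, keep_ratio=0.25):
    """Score tokens by aggregated attention in-degree across all layers,
    then return the indices of the tokens to keep in the KV cache.

    attn_per_layer: list of [heads, seq, seq] attention matrices, one per layer.
    keep_ratio: fraction of tokens to retain (illustrative knob).
    """
    seq_len = attn_per_layer[0].shape[-1]
    scores = np.zeros(seq_len)
    for attn in attn_per_layer:
        # In-degree of token j = total attention mass received by j,
        # summed over heads and query positions (column sums).
        scores += attn.sum(axis=(0, 1))
    k = max(1, int(seq_len * keep_ratio))
    # Keep the k highest-scoring tokens, reported in original sequence order.
    return np.sort(np.argsort(scores)[-k:])
```

Because scores are summed over every layer, a token that is dormant at one layer can still survive pruning if other layers attend to it heavily, which is the behavior a single-layer saliency snapshot would miss.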
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-context Language Understanding | LongBench | Average Score | 52.44 | 86 |
| Long-context Language Understanding | LongBench 1.0 (test) | MultiNews | 25.67 | 61 |
| Long-context Retrieval | RULER | Retrieval Accuracy (8K) | 81.3 | 34 |
| Long-context Language Modeling | LongBench (test) | Qasper Score | 50.15 | 29 |
| Long-context Language Understanding | LongBench Ministral-8B-Instruct | NrtvQA | 30.21 | 14 |
| Long-context Language Understanding | LongBench LLaMA-3.1-8B-Instruct (test) | NrtvQA | 32.8 | 14 |