
CHESS: Context-aware Hierarchical Efficient Semantic Selection for Long-Context LLM Inference

About

Long-context LLMs demand accurate inference at low latency, yet decoding becomes primarily constrained by the KV cache as context grows. Prior pruning methods are largely context-agnostic: their token selection ignores step-wise relevance and local semantics, which undermines quality. Moreover, their irregular memory accesses and selection overheads yield only limited wall-clock speedups. To address this, we propose CHESS, an algorithm-system co-designed KV-cache management system. Algorithmically, CHESS introduces a context-aware, hierarchical selection policy that dynamically reconstructs a coherent context for the current decoding step. System-wise, coarse-granularity selection eliminates expensive data movement, fully realizing practical acceleration from theoretical sparsity. Extensive evaluations demonstrate that CHESS surpasses Full-KV quality using only 1% of the KV cache, delivers low-latency, stable inference with up to 4.56× higher throughput, and consistently outperforms other strong baselines. Code is available at https://anonymous.4open.science/r/CHESS-9958/.
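To make the coarse-granularity idea concrete, here is a minimal, illustrative sketch of block-level KV selection: cached keys are grouped into contiguous blocks, each block is scored against the current query, and only the top-scoring blocks are kept for attention. This is not CHESS's actual algorithm; the function name, block scoring rule (mean dot-product), and parameters are all assumptions made for illustration.

```python
import math

def select_kv_blocks(q, keys, block_size=4, top_k=2):
    """Illustrative coarse-grained KV selection (hypothetical, not CHESS).

    Scores each contiguous block of cached key vectors by the mean
    dot-product with the current query, then keeps the top_k blocks.
    Selecting whole blocks (rather than scattered individual tokens)
    keeps memory accesses contiguous, which is what turns theoretical
    sparsity into practical speedup.
    """
    n_blocks = math.ceil(len(keys) / block_size)
    scores = []
    for b in range(n_blocks):
        blk = keys[b * block_size:(b + 1) * block_size]
        # Mean q·k over the block approximates the block's relevance
        # to the current decoding step.
        mean = sum(sum(qi * ki for qi, ki in zip(q, k)) for k in blk) / len(blk)
        scores.append((mean, b))
    # Keep the highest-scoring blocks; attention then runs only on these.
    kept = sorted(b for _, b in sorted(scores, reverse=True)[:top_k])
    return kept
```

For example, with a query aligned to blocks 0 and 2 of a 16-token cache, the function returns `[0, 2]`, so attention skips half the cached keys in whole-block units.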

Chao Fei, Guozhong Li, Chenxi Liu, Panos Kalnis • 2026

Related benchmarks

Task                       | Dataset                                | Result        | Rank
Long-context Understanding | LongBench v2                           | -             | 37
Long-Context Inference     | LongBench v2 (short, <32K tokens)      | Accuracy 41.7 | 8
Long-Context Inference     | LongBench v2 (medium, 32K–128K tokens) | Accuracy 27.4 | 8
Long-Context Inference     | LongBench v2 (long, >128K tokens)      | Accuracy 33.3 | 8
Long-Context Inference     | LongBench v2 (hard)                    | Accuracy 30.2 | 8
