
QUOKA: Query-Oriented KV Selection For Efficient LLM Prefill

About

We present QUOKA: Query-Oriented KV selection for efficient attention, a training-free, hardware-agnostic sparse attention algorithm for accelerating transformer inference under chunked prefill. While many queries attend to a small group of keys in the attention operator, we observe that queries with low cosine similarity to the mean query interact more strongly with more keys and contribute most to the final attention logits. By prioritizing these low-similarity queries, the behavior of full attention during the prefill stage can be closely approximated. QUOKA leverages this observation, accelerating attention by (1) first retaining a small set of representative queries and (2) then subselecting the keys most aligned with those queries. Through experiments on Needle-In-A-Haystack, LongBench, RULER, and Math500, we show that QUOKA achieves near-baseline accuracy while using 88% fewer key-value pairs per attention evaluation, realizing a 3x reduction in time-to-first-token, a 5x attention speedup on NVIDIA GPUs, and up to nearly a 7x speedup on Intel Xeon CPUs.
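The two-step selection described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function name `quoka_select` and the keep-fraction parameters are hypothetical, with the key fraction set to roughly 12% to mirror the reported 88% reduction in key-value pairs.

```python
import numpy as np

def quoka_select(Q, K, query_keep=0.25, key_keep=0.12):
    """Hypothetical sketch of query-oriented KV selection.

    Q: (n_q, d) query chunk; K: (n_k, d) keys.
    Returns indices of retained queries and selected keys.
    """
    # Step 1: keep the queries with LOW cosine similarity to the mean
    # query -- per the abstract, these contribute most to the logits.
    q_mean = Q.mean(axis=0)
    q_mean = q_mean / (np.linalg.norm(q_mean) + 1e-8)
    q_unit = Q / (np.linalg.norm(Q, axis=1, keepdims=True) + 1e-8)
    cos_sim = q_unit @ q_mean                       # (n_q,)
    n_rep = max(1, int(query_keep * Q.shape[0]))
    rep_idx = np.argsort(cos_sim)[:n_rep]           # lowest similarity first

    # Step 2: score each key by its strongest alignment with any
    # representative query, then keep only the top fraction of keys.
    scores = (Q[rep_idx] @ K.T).max(axis=0)         # (n_k,)
    n_keys = max(1, int(key_keep * K.shape[0]))
    key_idx = np.argsort(scores)[-n_keys:]
    return rep_idx, np.sort(key_idx)
```

Sparse attention would then be evaluated only over the selected key (and corresponding value) indices, shrinking the quadratic attention cost during prefill.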

Dalton Jones, Junyoung Park, Matthew Morse, Mingu Lee, Chris Lott, Harper Langston • 2026

Related benchmarks

Task | Dataset | Result | Rank
Long-context Understanding | LongBench | Accuracy: 103 | 60
Long-context evaluation | RULER 16k | Total Score: 88.57 | 59
Long-context evaluation | RULER 32k | Overall Score: 74.83 | 41
Long-context evaluation | RULER 4k | Score: 93.73 | 35
Long-context evaluation | RULER 8k | Score: 91.07 | 35
Mathematical Reasoning | MATH 500 | Flex Match: 91.3 | 27
Long-context capability evaluation | RULER 8192 length | Accuracy: 92.77 | 12
Long-context capability evaluation | RULER 16384 length | Accuracy: 91.9 | 12
Long-context capability evaluation | RULER 32768 length | Accuracy: 91.08 | 12
Long-context capability evaluation | RULER 4096 length | Accuracy: 93.25 | 12
