
Streaming Attention Approximation via Discrepancy Theory

About

Large language models (LLMs) have achieved impressive success, but their high memory requirements present challenges for long-context token generation. In this paper we study the streaming complexity of attention approximation, a key computational primitive underlying token generation. Our main contribution is BalanceKV, a streaming algorithm for $\epsilon$-approximating attention computations, based on a geometric process that selects a balanced collection of key and value tokens in the spirit of Banaszczyk's vector balancing theory. We complement our algorithm with space lower bounds for streaming attention computation. Beyond its strong theoretical guarantees, BalanceKV exhibits empirically validated performance improvements over existing methods, both for attention approximation and for end-to-end performance on various long-context benchmarks.
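For context, the primitive being approximated is softmax attention over a stream of key-value pairs: $\mathrm{Attn}(q, K, V) = \mathrm{softmax}(qK^\top/\sqrt{d})\,V$. The paper's BalanceKV construction is not reproduced here; the following is a minimal, hypothetical sketch of the general idea behind discrepancy-based KV compression: a self-balancing walk (in the style of Alweiss, Liu, and Sawhney's constructive approach to Banaszczyk-type vector balancing) assigns $\pm 1$ signs to KV pairs so that the signed sum stays small, and one sign class of roughly half the tokens is kept. The function name `balanced_halving`, the concatenated feature map, and the boundary constant are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def balanced_halving(keys, values, delta=0.01, rng=None):
    """Assign +/-1 signs to KV pairs with a self-balancing walk,
    then keep one sign class as a half-size subsample whose signed
    sum has small norm. Illustrative sketch, not BalanceKV itself."""
    rng = np.random.default_rng() if rng is None else rng
    n = keys.shape[0]
    # Assumption: balance the concatenated (key, value) vectors so the
    # kept half is representative of both keys and values.
    X = np.concatenate([keys, values], axis=1)
    # Normalize rows to norm <= 1, as the walk's analysis assumes.
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    c = 30.0 * np.log(n / delta)   # walk boundary; constant is illustrative
    w = np.zeros(X.shape[1])       # running signed sum
    signs = np.empty(n, dtype=np.int8)
    for i in range(n):
        # Bias the sign toward reducing |<w, x_i>|, i.e. toward balance.
        p = 0.5 - np.dot(w, X[i]) / (2.0 * c)
        signs[i] = 1 if rng.random() < np.clip(p, 0.0, 1.0) else -1
        w += signs[i] * X[i]
    keep = signs > 0               # one sign class, ~half of the tokens
    return keys[keep], values[keep]
```

When the signed sum is small, either sign class, reweighted by a factor of 2, approximately preserves the sums entering the attention numerator and denominator; applying such a halving step repeatedly compresses the KV cache geometrically while controlling the approximation error.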

Insu Han, Michael Kapralov, Ekaterina Kochetkova, Kshiteej Sheth, Amir Zandieh • 2025

Related benchmarks

Task                                 Dataset            Result  Rank
Long-context Language Understanding  LongBench-e (LCC)  69.16   9
