
Hardware-Efficient Attention for Fast Decoding

About

LLM decoding is bottlenecked for large batches and long contexts by loading the key-value (KV) cache from high-bandwidth memory, which inflates per-token latency, while the sequential nature of decoding limits parallelism. We analyze the interplay among arithmetic intensity, parallelization, and model quality and question whether current architectures fully exploit modern hardware. This work redesigns attention to perform more computation per byte loaded from memory to maximize hardware efficiency without trading off parallel scalability. We first propose Grouped-Tied Attention (GTA), a simple variant that combines and reuses key and value states, reducing memory transfers without compromising model quality. We then introduce Grouped Latent Attention (GLA), a parallel-friendly latent attention paired with low-level optimizations for fast decoding while maintaining high model quality. Experiments show that GTA matches Grouped-Query Attention (GQA) quality while using roughly half the KV cache and that GLA matches Multi-head Latent Attention (MLA) and is easier to shard. Our optimized GLA kernel is up to 2× faster than FlashMLA, for example, in a speculative decoding setting when the query length exceeds one. Furthermore, by fetching a smaller KV cache per device, GLA reduces end-to-end latency and increases throughput in online serving benchmarks by up to 2×.

Ted Zadouri, Hubert Strauss, Tri Dao • 2025

Related benchmarks

| Task                  | Dataset        | Result                        | Rank |
|-----------------------|----------------|-------------------------------|------|
| Commonsense Reasoning | HellaSwag      | --                            | 1891 |
| Language Modeling     | C4 (val)       | PPL 16.323                    | 514  |
| Commonsense Reasoning | WinoGrande     | Accuracy 58.48                | 372  |
| Commonsense Reasoning | BoolQ          | Accuracy 61.96                | 212  |
| Commonsense Reasoning | ARC-C          | --                            | 172  |
| Language Modeling     | FineWeb (val)  | --                            | 159  |
| Commonsense Reasoning | ARC-E          | Accuracy 68.77                | 106  |
| Commonsense Reasoning | PIQA           | Accuracy 75.14                | 71   |
| Commonsense Reasoning | OpenBookQA     | Accuracy 42.6                 | 71   |
| Language Modeling     | The Pile (val) | Perplexity (bits/byte) 13.225 | 31   |

Showing 10 of 15 rows.
