Flex Attention: A Programming Model for Generating Optimized Attention Kernels

About

Over the past 7 years, attention has become one of the most important primitives in deep learning. The primary approach to optimizing attention is FlashAttention, which fuses the operation into a single kernel, drastically improving both runtime and memory consumption. However, the importance of FlashAttention, combined with its monolithic nature, poses a problem for researchers aiming to try new attention variants -- a "software lottery". This problem is exacerbated by the difficulty of writing efficient fused attention kernels, which resist traditional compiler-based approaches. We introduce FlexAttention, a novel compiler-driven programming model that allows implementing the majority of attention variants in a few lines of idiomatic PyTorch code. We demonstrate that many existing attention variants (e.g., ALiBi, document masking, PagedAttention) can be implemented via FlexAttention, and that we achieve performance competitive with these handwritten kernels. Finally, we demonstrate how FlexAttention allows for easy composition of attention variants, addressing the combinatorial explosion of attention variants.
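
To give a flavor of the programming model, here is a minimal sketch using the torch.nn.attention.flex_attention API available in PyTorch 2.5+. The tensor shapes and the ALiBi slope schedule are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the FlexAttention programming model, assuming the
# torch.nn.attention.flex_attention API (PyTorch 2.5+). Shapes and the
# ALiBi slope schedule are illustrative assumptions.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

B, H, S, D = 4, 8, 1024, 64  # batch, heads, sequence length, head dim (assumed)
q, k, v = (torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
           for _ in range(3))

# ALiBi as a score_mod: subtract a per-head linear distance penalty from each score.
alibi_slopes = torch.exp2(-8.0 * torch.arange(1, H + 1, device="cuda") / H)

def alibi(score, b, h, q_idx, kv_idx):
    return score - alibi_slopes[h] * (q_idx - kv_idx)

# Causal masking as a mask_mod, compiled once into a sparse BlockMask.
def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S)

# Variants compose: one fused kernel applies both the bias and the mask.
flex_attention = torch.compile(flex_attention)  # generates the fused kernel
out = flex_attention(q, k, v, score_mod=alibi, block_mask=block_mask)
```

Because score_mod and mask_mod are ordinary Python functions over index tensors, combining two variants is just combining two functions; no new handwritten kernel is required.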

Juechu Dong, Boyuan Feng, Driss Guessous, Yanbo Liang, Horace He • 2024

Related benchmarks

| Task | Dataset | Result (TFLOPS) | Rank |
|---|---|---|---|
| Attention Operator Throughput | Llama2 7B (32 Q-heads/32 KV-heads/128 Head-dimension) | 167.1 | 30 |
| Attention Operator Throughput | Qwen2.5 72B (64 Q-heads/8 KV-heads/128 Head-dimension) | 172.4 | 29 |
| Attention Operator Throughput | Llama 3.1 405B (128 Q-heads/8 KV-heads/128 Head-dimension) | 175.3 | 28 |
| Masked Multi-Head Attention | T4 GPU Synthetic Performance Benchmark | 13.45 | 5 |
| Masked Grouped Query Attention | T4 GPU Synthetic Performance Benchmark | 15.13 | 3 |
| Grouped Query Attention | T4 GPU Synthetic Performance Benchmark | 20.12 | 2 |
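
The grouped-query rows above use fewer KV heads than query heads. As a hedged sketch of what such a configuration looks like with FlexAttention, assuming the enable_gqa flag in recent PyTorch releases and mirroring the 64 Q-head / 8 KV-head / 128 head-dim setup of the Qwen2.5 72B row (batch size and sequence length are arbitrary):

```python
# Sketch of grouped-query attention (GQA) with FlexAttention, assuming the
# enable_gqa flag in recent PyTorch releases. Head counts mirror the
# Qwen2.5 72B row above; batch size and sequence length are arbitrary.
import torch
from torch.nn.attention.flex_attention import flex_attention

B, HQ, HKV, S, D = 1, 64, 8, 2048, 128
q = torch.randn(B, HQ, S, D, device="cuda", dtype=torch.float16)
k = torch.randn(B, HKV, S, D, device="cuda", dtype=torch.float16)
v = torch.randn(B, HKV, S, D, device="cuda", dtype=torch.float16)

# Each group of 8 query heads (64 / 8) attends to one shared KV head.
out = flex_attention(q, k, v, enable_gqa=True)
```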
