SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration

About

The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of $O(N^2)$, compared to $O(N)$ for linear transformations. When handling large sequence lengths, attention becomes the primary time-consuming component. Although quantization has proven to be an effective method for accelerating model inference, existing quantization methods primarily focus on optimizing the linear layer. In response, we first analyze the feasibility of quantizing attention in detail. Following that, we propose SageAttention, a highly efficient and accurate quantization method for attention. The OPS (operations per second) of our approach outperforms FlashAttention2 and xformers by about 2.1 times and 2.7 times, respectively. SageAttention also achieves superior accuracy over FlashAttention3. Comprehensive experiments confirm that our approach incurs almost no end-to-end metrics loss across diverse models, including those for large language processing, image generation, and video generation. The code is available at https://github.com/thu-ml/SageAttention.
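To make the idea of 8-bit attention quantization concrete, here is a minimal PyTorch sketch of the general pattern the abstract describes: Q and K are quantized to INT8 before the $QK^\top$ product, and the scores are dequantized before softmax. This is an illustrative simplification only, not the authors' optimized CUDA kernels (SageAttention additionally uses techniques such as smoothing K and per-block quantization); the function and tensor names below are hypothetical.

```python
# Illustrative sketch of 8-bit attention quantization (not the SageAttention kernels).
import torch

def int8_quantize(x):
    # Per-tensor symmetric INT8 quantization; returns the quantized tensor
    # and the scale needed to dequantize.
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

def quantized_attention(q, k, v):
    # q, k, v: [batch, heads, seq_len, head_dim], floating point.
    d = q.shape[-1]
    q_int8, q_scale = int8_quantize(q)
    k_int8, k_scale = int8_quantize(k)
    # The INT8 matmul is emulated in floating point here for simplicity;
    # real kernels run it on INT8 tensor cores.
    scores = torch.matmul(q_int8.float(), k_int8.float().transpose(-2, -1))
    scores = scores * (q_scale * k_scale) / d ** 0.5  # dequantize and apply 1/sqrt(d)
    probs = torch.softmax(scores, dim=-1)
    return torch.matmul(probs, v)  # PV product kept in floating point

# Example usage as a drop-in replacement for a standard attention call.
q = torch.randn(1, 8, 1024, 64)
k = torch.randn(1, 8, 1024, 64)
v = torch.randn(1, 8, 1024, 64)
out = quantized_attention(q, k, v)
```

The point of the sketch is the "plug-and-play" aspect: the quantization happens entirely inside the attention call, so callers pass the same floating-point Q, K, V tensors as before and no model retraining or weight changes are required.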

Jintao Zhang, Jia Wei, Haofeng Huang, Pengle Zhang, Jun Zhu, Jianfei Chen • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | ImageNet-1K | Top-1 Acc | 82.98 | 1239 |
| Semantic Segmentation | ADE20K | -- | -- | 1024 |
| Class-conditional Image Generation | ImageNet 256x256 | Inception Score (IS) | 278 | 815 |
| Object Detection | COCO 2017 | AP (Box) | 50.2 | 321 |
| Instance Segmentation | COCO 2017 | APm | 43.48 | 226 |
| Sign Language Video Generation | Wan Sign Language Video Generation v2.1 (test) | Latency (s) | 141 | 12 |
| Sign Language Video Generation | WLASL | FVD | 501 | 6 |
