
SnapMLA: Efficient Long-Context MLA Decoding via Hardware-Aware FP8 Quantized Pipelining

About

While FP8 attention has shown substantial promise in innovations like FlashAttention-3, its integration into the decoding phase of the DeepSeek Multi-head Latent Attention (MLA) architecture presents notable challenges. These challenges include numerical heterogeneity arising from the decoupling of positional embeddings, misalignment of quantization scales in FP8 PV GEMM, and the need for optimized system-level support. In this paper, we introduce SnapMLA, an FP8 MLA decoding framework optimized to improve long-context efficiency through the following hardware-aware algorithm-kernel co-optimization techniques: (i) RoPE-Aware Per-Token KV Quantization, in which the RoPE part is kept in high precision, motivated by our comprehensive analysis of the heterogeneous quantization sensitivity inherent to the MLA KV cache; per-token granularity is employed to align with the autoregressive decoding process and preserve quantization accuracy. (ii) Quantized PV Computation Pipeline Reconstruction, which resolves the misalignment of quantization scales in FP8 PV computation stemming from the shared KV structure of the MLA KV cache. (iii) End-to-End Dataflow Optimization, in which we establish an efficient data read-and-write workflow using specialized kernels, ensuring efficient data flow and performance gains. Extensive experiments on state-of-the-art MLA LLMs show that SnapMLA achieves up to a 1.91x throughput improvement with negligible accuracy degradation on challenging long-context tasks, including mathematical reasoning and code generation benchmarks. Code is available at https://github.com/meituan-longcat/SGLang-FluentLLM.
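The RoPE-aware per-token quantization idea can be illustrated with a minimal sketch. The snippet below is a simplified, hypothetical illustration (not the SnapMLA kernel): it splits each cached token into a latent part and a trailing block of RoPE channels, keeps the RoPE channels in high precision, and computes one symmetric scale per token so the latent values fit the FP8 E4M3 range; the actual cast to FP8 is omitted to stay dependency-light. The function names, the 512/64 split, and the layout (RoPE channels last) are assumptions for illustration.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in FP8 E4M3

def quantize_per_token_fp8(x):
    """Per-token symmetric quantization into the FP8 E4M3 range (simulated).

    x: (num_tokens, dim) high-precision values.
    Returns (q, scale): q holds values rescaled into [-448, 448] (a real
    kernel would cast q to E4M3 here); scale has shape (num_tokens, 1),
    so dequantization is simply q * scale.
    """
    amax = np.abs(x).max(axis=-1, keepdims=True)
    scale = np.maximum(amax, 1e-12) / FP8_E4M3_MAX
    q = x / scale
    return q, scale

def split_and_quantize_kv(kv, rope_dim):
    """Keep the trailing `rope_dim` RoPE channels in high precision;
    quantize the remaining latent channels per token."""
    nope, rope = kv[:, :-rope_dim], kv[:, -rope_dim:]
    q, scale = quantize_per_token_fp8(nope)
    return q, scale, rope

# Usage: 4 cached tokens, 512-dim latent part + 64 RoPE channels.
kv = np.random.randn(4, 512 + 64).astype(np.float32)
q, scale, rope = split_and_quantize_kv(kv, rope_dim=64)
recon = q * scale  # matches the latent part up to the (omitted) FP8 rounding
```

Per-token scales matter in decoding because each autoregressive step appends exactly one token, so its scale can be computed once at append time without rescaling the rest of the cache.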

Yifan Zhang, Zunhai Su, Shuhao Hu, Rui Yang, Wei Wu, Yulei Qian, Yuchen Xie, Xunliang Cai · 2026

Related benchmarks

Task                    Dataset                            Metric                    Result   Rank
Mathematical Reasoning  AIME 24                            Avg@32 Accuracy           93.65    23
Alignment               IFEval strict prompt               pass@1                    87.8     16
General QA              MMLU-Redux                         Exact Match               90.89    7
Alignment               Arena Hard                         Hard Prompt Gemini Score  70.4     4
Coding                  LiveCodeBench (LCB) 24.08-25.05    Mean@4                    79.74    4
General QA              MMLU-Pro                           Accuracy                  84.43    4
General Reasoning       GPQA Diamond                       Mean@16                   82.57    4
General Reasoning       ZebraLogic                         Mean@1                    96       4
Mathematical Reasoning  AIME 25                            Mean@32                   88.44    4
Mathematical Reasoning  BeyondAIME                         Mean@10                   70.2     4

Showing 10 of 11 rows.
