Language Modeling Inference on Qwen2.5-7B (256K Context Length)
[Chart: Decode Latency (ms/token) over time for this benchmark; current best is FastMKA at 26.3 ms/token (Mar 21, 2026). Available metrics: Decode Latency (ms/token), Speedup vs MLA.]
Evaluation Results
| Method  | Details                | Date    | Decode Latency (ms/token) | Speedup vs MLA |
|---------|------------------------|---------|---------------------------|----------------|
| FastMKA | Batch size=1, Precisio… | 2026.03 | 26.3                      | 1.86           |
| MLA     | Batch size=1, Precisio… | 2026.03 | 48.9                      | -              |
| GQA     | Batch size=1, Precisio… | 2026.03 | 75.2                      | -              |
| MHA     | Batch size=1, Precisio… | 2026.03 | 87.6                      | -              |
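A minimal sketch of how the "Speedup vs MLA" column can be reproduced from the reported decode latencies, assuming (our assumption, not stated on this page) that speedup is simply the MLA baseline latency divided by each method's latency:

```python
# Sketch only: reproduces the "Speedup vs MLA" column from the table above,
# assuming speedup = MLA decode latency / method decode latency.
decode_latency_ms = {
    "FastMKA": 26.3,
    "MLA": 48.9,
    "GQA": 75.2,
    "MHA": 87.6,
}

baseline = decode_latency_ms["MLA"]
for method, latency in decode_latency_ms.items():
    speedup = baseline / latency
    print(f"{method:8s} {latency:5.1f} ms/token  {speedup:.2f}x vs MLA")

# FastMKA: 48.9 / 26.3 ≈ 1.86x, matching the table entry.
```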