Long-Context LLM Inference Decode
[Chart: attention latency (ms) per method over time, last updated Dec 18, 2025; selectable metrics: Latency (ms), Ratio w.r.t. TL, Attn Speedup (FA3), Attn Speedup (TL)]
Evaluation Results
| Method | Configuration | Date | Latency (ms) | Ratio w.r.t. TL | Attn Speedup (FA3) | Attn Speedup (TL) |
|---|---|---|---|---|---|---|
| Reuse | Seqlen=8192, Topk%=10%... | 2025.12 | 0.13 | 0.18 | - | - |
| Kascade | Seqlen=8192, Topk%=10%... | 2025.12 | 0.24 | - | 2.91 | 2.95 |
| FA3 | Seqlen=8192, Topk%=10%... | 2025.12 | 0.70 | - | - | - |
| Tilelang (TL) | Seqlen=8192, Topk%=10%... | 2025.12 | 0.71 | - | - | - |
| Anchor | Seqlen=8192, Topk%=10%... | 2025.12 | 0.82 | 1.15 | - | - |
| Anchor layer 0 | Seqlen=8192, Topk%=10%... | 2025.12 | 0.92 | 1.30 | - | - |
| Kascade | Seqlen=524288, Topk%=1... | 2025.12 | 5.33 | - | 4.10 | 4.08 |
| Kascade | Seqlen=524288, Topk%=3... | 2025.12 | 10.17 | - | 2.15 | 2.14 |
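The derived columns are consistent with simple ratios of the raw latencies: Ratio w.r.t. TL appears to be the method's latency divided by the Tilelang (TL) baseline (lower is better), while Attn Speedup appears to be the baseline latency divided by the method's latency (higher is better). The sketch below checks these definitions against the Seqlen=8192 rows; it is a minimal illustration, not the leaderboard's own tooling, and the names `FA3_MS`, `TL_MS`, `ratio_wrt_tl`, and `attn_speedup` are assumptions introduced here.

```python
# Minimal sketch of how the derived columns appear to relate to raw latencies.
# Baseline latencies come from the FA3 and Tilelang (TL) rows at Seqlen=8192.

FA3_MS = 0.70  # FA3 attention latency at Seqlen=8192 (from the table)
TL_MS = 0.71   # Tilelang attention latency at Seqlen=8192 (from the table)

def ratio_wrt_tl(latency_ms: float) -> float:
    """Method latency over the TL baseline; below 1.0 means faster than TL."""
    return latency_ms / TL_MS

def attn_speedup(latency_ms: float, baseline_ms: float) -> float:
    """Baseline latency over method latency; above 1.0 means faster."""
    return baseline_ms / latency_ms

print(f"Reuse ratio w.r.t. TL:  {ratio_wrt_tl(0.13):.2f}")          # ~0.18
print(f"Anchor ratio w.r.t. TL: {ratio_wrt_tl(0.82):.2f}")          # ~1.15
print(f"Kascade speedup vs FA3: {attn_speedup(0.24, FA3_MS):.2f}")  # ~2.92
print(f"Kascade speedup vs TL:  {attn_speedup(0.24, TL_MS):.2f}")   # ~2.96
```

The computed Kascade speedups (2.92 and 2.96) land slightly above the published 2.91 and 2.95, which suggests the table's values were derived from unrounded latencies. Applying the same reading to the Seqlen=524288 Kascade rows would imply FA3/TL baselines of roughly 21-22 ms at that length, though those baseline rows are not listed here.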