Multi-turn conversation on Long-MT-Bench+
[Chart: method accuracy on Long-MT-Bench+ over time; current best is Rhea at 7.36 (Dec 7, 2025). View toggles: Accuracy / Latency.]
Evaluation Results
Method                           | Date    | Accuracy | Latency
Rhea                             | 2025.12 | 7.36     | 29.08
BM25 (RAG)                       | 2025.12 | 6.65     | 10.81
Vanilla                          | 2025.12 | 6.32     | 27.29
MemGAS                           | 2025.12 | 6.07     | -
Recent-k                         | 2025.12 | 5.03     | 13.89
Reply-Soft-Compress              | 2025.12 | 4.55     | 31.79
LongAlpaca (base: Vicuna-7B)     | 2025.12 | 2.43     | 23.73
Memocha (base: Vicuna-7B)        | 2025.12 | 1.88     | 11.87
LlmLingua2                       | 2025.12 | 1.50     | 29.73
Summary                          | 2025.12 | 1.49     | 33.55
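For readers who want to slice these results themselves, here is a minimal sketch that transcribes the rows above into Python and ranks methods by accuracy and latency (the list structure and variable names are my own; latency is recorded as None where the table reports "-"):

```python
# Leaderboard rows transcribed from the Long-MT-Bench+ table above:
# (method, accuracy, latency); latency is None where unreported.
RESULTS = [
    ("Rhea", 7.36, 29.08),
    ("BM25 (RAG)", 6.65, 10.81),
    ("Vanilla", 6.32, 27.29),
    ("MemGAS", 6.07, None),
    ("Recent-k", 5.03, 13.89),
    ("Reply-Soft-Compress", 4.55, 31.79),
    ("LongAlpaca (Vicuna-7B)", 2.43, 23.73),
    ("Memocha (Vicuna-7B)", 1.88, 11.87),
    ("LlmLingua2", 1.50, 29.73),
    ("Summary", 1.49, 33.55),
]

# Rank methods by accuracy, highest first.
by_accuracy = sorted(RESULTS, key=lambda r: r[1], reverse=True)
print(by_accuracy[0][0])  # → Rhea

# Among methods with a reported latency, find the fastest.
fastest = min((r for r in RESULTS if r[2] is not None), key=lambda r: r[2])
print(fastest[0])  # → BM25 (RAG)
```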