Machine Translation on Fr-En document-level translation
[Chart: d-COMET over time. Highest entry: Ours, 87 d-COMET (Apr 8, 2025).]
Evaluation Results
| Method | Base Model | Date | d-COMET |
| --- | --- | --- | --- |
| Ours | Mistral-Nem... | 2025.04 | 87 |
| Ours | LLaMA-3-8B-... | 2025.04 | 86.69 |
| DocRefine (sent) | LLaMA-3-8B-... | 2025.04 | 86.56 |
| Ours (-QA) | LLaMA-3-8B-... | 2025.04 | 86.53 |
| Sent2Sent (tuned) | LLaMA-3-8B-... | 2025.04 | 86.46 |
| DocRefine (doc) | Mistral-Nem... | 2025.04 | 86.41 |
| Ours (-QA) | Mistral-Nem... | 2025.04 | 86.4 |
| Doc2Doc (tuned) | Mistral-Nem... | 2025.04 | 86.39 |
| Doc2Doc (tuned) | LLaMA-3-8B-... | 2025.04 | 86.37 |
| DocRefine (doc) | LLaMA-3-8B-... | 2025.04 | 86.32 |
| SentRefine (sent) | Mistral-Nem... | 2025.04 | 86.23 |
| DocRefine (sent) | Mistral-Nem... | 2025.04 | 86.21 |
| SentRefine (sent) | LLaMA-3-8B-... | 2025.04 | 85.98 |
| Doc2Doc | Mistral-Nem... | 2025.04 | 85.95 |
| Sent2Sent (tuned) | Mistral-Nem... | 2025.04 | 85.79 |
| Doc2Doc | LLaMA-3-8B-... | 2025.04 | 85.4 |
| Sent2Sent | Mistral-Nem... | 2025.04 | 85.27 |
| Sent2Sent | LLaMA-3-8B-... | 2025.04 | 84.43 |