Code-switched text synthesis on De-En
[Leaderboard chart: BLEU and METEOR over time. Current state of the art: 32.65 BLEU, Fine-tuned PMMTM on all language pairs (augment-MMT), May 26, 2023.]
Evaluation Results

| Method | Type | Date | BLEU | METEOR |
|---|---|---|---|---|
| Fine-tuned PMMTM on all language pairs (augment-MMT) | Supervised | 2023.05 | 32.65 | 59.96 |
| Fine-tuned PMMTM on all language pairs (mBART50-MMT) | Supervised | 2023.05 | 32.24 | 59.75 |
| Gupta et al. (2020) | Supervised | 2023.05 | 24.15 | 30.47 |
| GLOSS (augment-MMT + prefix) | Zero-shot transfer | 2023.05 | 21.88 | 50.33 |
| GLOSS (mBART50-MMT + prefix) | Zero-shot transfer | 2023.05 | 20.49 | 48.49 |
| GLOSS (mBART50-MMT + adapter) | Zero-shot transfer | 2023.05 | 18.63 | 48.28 |
| GLOSS (augment-MMT + adapter) | Zero-shot transfer | 2023.05 | 14.58 | 40.75 |
| Fine-tuned PMMTM on available language pairs | Zero-shot transfer | 2023.05 | 9.09 | 32.34 |
| Machine Translation | Unsupervised | 2023.05 | 6.30 | 30.28 |
| Translate, Align, then Swap | Unsupervised | 2023.05 | 5.53 | 27.30 |
| Copy Input | Unsupervised | 2023.05 | 3.29 | 22.76 |
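As a rough illustration of what the BLEU column above measures, here is a minimal pure-Python sketch of sentence-level BLEU (clipped n-gram precision with a brevity penalty). Leaderboards like this one typically report corpus-level BLEU from a standard tool such as sacrebleu; this simplified version is only for intuition, and the example sentences are made up.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def simple_bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU on a 0-100 scale:
    geometric mean of clipped 1..max_n-gram precisions,
    scaled by a brevity penalty for short hypotheses."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each n-gram's count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # real BLEU applies smoothing instead
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))  # brevity penalty
    return 100 * bp * geo_mean

# A perfect match scores 100; disjoint outputs score 0.
print(simple_bleu("the cat sat on the mat", "the cat sat on the mat"))  # 100.0
```

Note that the zero-return on any zero precision (rather than smoothing) and the single-reference setup are simplifications; they are why this sketch is not directly comparable to the table's scores.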