Code-switched text synthesis on Es-En
[Chart: state-of-the-art over time. Current best: 38.59 BLEU, Fine-tuned PMMTM on all language pairs (augment-MMT), May 26, 2023. Metrics tracked: BLEU, METEOR.]
Evaluation Results

| Method | Type | Date | BLEU | METEOR |
|---|---|---|---|---|
| Fine-tuned PMMTM on all language pairs (augment-MMT) | Supervised | 2023.05 | 38.59 | 63.36 |
| Fine-tuned PMMTM on all language pairs (mBART50-MMT) | Supervised | 2023.05 | 37.82 | 62.54 |
| GLOSS (augment-MMT + prefix) | Zero-shot transfer | 2023.05 | 24.85 | 51.88 |
| GLOSS (mBART50-MMT + prefix) | Zero-shot transfer | 2023.05 | 23.47 | 50.52 |
| GLOSS (mBART50-MMT + adapter) | Zero-shot transfer | 2023.05 | 23.04 | 49.75 |
| Gupta et al. (2020) | Supervised | 2023.05 | 22.47 | 29.45 |
| GLOSS (augment-MMT + adapter) | Zero-shot transfer | 2023.05 | 16.62 | 42.31 |
| Machine Translation | Unsupervised | 2023.05 | 9.63 | 32.97 |
| Fine-tuned PMMTM on available language pairs | Zero-shot transfer | 2023.05 | 8.77 | 30.41 |
| Translate, Align, then Swap | Unsupervised | 2023.05 | 7.8 | 30.11 |
| Copy Input | Unsupervised | 2023.05 | 3.28 | 22.31 |
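The scores above are BLEU and METEOR. As a rough illustration of what corpus-level BLEU measures (clipped n-gram precision combined with a brevity penalty), here is a minimal stdlib-only sketch; it is an assumption-laden simplification (single reference per hypothesis, whitespace tokenization), and published numbers such as those in this table are produced with standard toolkits like sacrebleu, where exact scores also depend on tokenization.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Simplified corpus BLEU: uniform weights over 1..max_n-gram
    clipped precisions, plus the standard brevity penalty.
    Assumes one reference string per hypothesis, split on whitespace."""
    clipped = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # total hypothesis n-grams, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            hc, rc = ngrams(h, n), ngrams(r, n)
            totals[n - 1] += sum(hc.values())
            clipped[n - 1] += sum(min(c, rc[g]) for g, c in hc.items())
    if min(clipped) == 0:  # any precision of zero zeroes the geometric mean
        return 0.0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)

# Example on a toy Es-En code-switched pair (scores are on the 0-100 scale,
# matching the table above):
hyps = ["el weekend vamos al mall a comprar shoes"]
refs = ["el weekend vamos al mall a comprar shoes"]
print(corpus_bleu(hyps, refs))  # identical output scores 100.0
```

The "Copy Input" baseline in the table illustrates why BLEU alone is informative here: simply echoing the monolingual input still shares some n-grams with the code-switched reference, yielding a nonzero but low score (3.28).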