Code-switched text synthesis on Hi-En
State of the art: 28.88 BLEU — Fine-tuned PMMTM on all language pairs (augment-MMT), May 26, 2023.
[Chart: benchmark progress over time; metrics: BLEU, METEOR]
Evaluation Results

| Method                                                | Type               | Date    | BLEU  | METEOR |
|-------------------------------------------------------|--------------------|---------|-------|--------|
| Fine-tuned PMMTM on all language pairs (augment-MMT)  | Supervised         | 2023.05 | 28.88 | 55.1   |
| Fine-tuned PMMTM on all language pairs (mBART50-MMT)  | Supervised         | 2023.05 | 27.93 | 54.81  |
| Gupta et al. (2020)                                   | Supervised         | 2023.05 | 21.55 | 28.37  |
| GLOSS (augment-MMT + prefix)                          | Zero-shot transfer | 2023.05 | 12.16 | 36.94  |
| Machine Translation                                   | Unsupervised       | 2023.05 | 9.87  | 24.26  |
| GLOSS (augment-MMT + adapter)                         | Zero-shot transfer | 2023.05 | 8.61  | 30.39  |
| GLOSS (mBART50-MMT + prefix)                          | Zero-shot transfer | 2023.05 | 7.51  | 29.82  |
| Translate, Align, then Swap                           | Unsupervised       | 2023.05 | 6.61  | 24.9   |
| Copy Input                                            | Unsupervised       | 2023.05 | 5.22  | 24.2   |
| GLOSS (mBART50-MMT + adapter)                         | Zero-shot transfer | 2023.05 | 4.09  | 22.02  |
| Fine-tuned PMMTM on available language pairs          | Zero-shot transfer | 2023.05 | 3.93  | 22.22  |