
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning

About

Pretrained language models are promising particularly for low-resource languages as they only require unlabelled data. However, training existing models requires huge amounts of compute, while pretrained cross-lingual models often underperform on low-resource languages. We propose Multi-lingual language model Fine-Tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language. In addition, we propose a zero-shot method using an existing pretrained cross-lingual model. We evaluate our methods on two widely used cross-lingual classification datasets where they outperform models pretrained on orders of magnitude more data and compute. We release all models and code.
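MultiFiT follows the ULMFiT recipe of pretraining a language model on unlabelled target-language text, fine-tuning it on the task corpus, and then fine-tuning a classifier on top. The snippet below is a minimal sketch of that three-stage pipeline using the fastai v1 API with its standard AWD_LSTM; MultiFiT itself swaps in a QRNN architecture with subword (SentencePiece) tokenization, which is omitted here. The `data/texts.csv` path and all hyperparameters are illustrative assumptions, not the paper's settings.

```python
from fastai.text import *  # fastai v1 text API

path = Path('data')  # hypothetical folder containing texts.csv (label, text columns)

# Stage 1 (pretraining on unlabelled target-language text, e.g. Wikipedia)
# is assumed done; fastai's pretrained=True default loads generic weights.

# Stage 2: fine-tune the language model on the task corpus.
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')
lm_learner = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm_learner.fit_one_cycle(1, 1e-2)
lm_learner.unfreeze()
lm_learner.fit_one_cycle(4, 1e-3)
lm_learner.save_encoder('ft_enc')  # keep the fine-tuned encoder

# Stage 3: fine-tune a classifier on labelled examples, reusing the encoder.
data_clas = TextClasDataBunch.from_csv(path, 'texts.csv', vocab=data_lm.vocab)
clas_learner = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clas_learner.load_encoder('ft_enc')
clas_learner.fit_one_cycle(1, 1e-2)
clas_learner.freeze_to(-2)  # gradual unfreezing, as in ULMFiT
clas_learner.fit_one_cycle(1, slice(1e-2 / 2.6**4, 1e-2))
clas_learner.unfreeze()
clas_learner.fit_one_cycle(2, slice(1e-3 / 2.6**4, 1e-3))
```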
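The abstract describes the zero-shot method only at a high level. The sketch below captures the underlying pseudo-labelling idea under stated assumptions (the function names are hypothetical, not the released API): a cross-lingual classifier trained on source-language labels, such as one built on LASER embeddings, predicts labels for unlabelled target-language text, and those predictions then supervise the monolingual model, so no target-language labels are required.

```python
from typing import Callable, List

def zero_shot_bootstrap(
    teacher_predict: Callable[[List[str]], List[int]],   # cross-lingual classifier (hypothetical)
    student_fit: Callable[[List[str], List[int]], None], # monolingual classifier trainer (hypothetical)
    target_texts: List[str],                             # unlabelled target-language documents
) -> None:
    """Distil a cross-lingual teacher into a monolingual student:
    the teacher's predictions on unlabelled target-language text
    serve as pseudo-labels for fine-tuning the student."""
    pseudo_labels = teacher_predict(target_texts)
    student_fit(target_texts, pseudo_labels)
```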

Julian Martin Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, Jeremy Howard • 2019

Related benchmarks

Task | Dataset | Result | Rank
Document Classification | MLDoc 1.0 (test) | Accuracy (DE): 0.959 | 12
Cross-lingual Document Classification | MLDoc (test) | Accuracy (EN->FR): 89.4 | 8
Sentiment Classification | CLS (test) | Accuracy (DE Books): 93.19 | 8
Cross-lingual Classification | Webis-CLS-10 (test) | -- | 7

Other info

Code
