
CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot Cross-Lingual NLP

About

Multi-lingual contextualized embeddings, such as multilingual BERT (mBERT), have shown success in a variety of zero-shot cross-lingual tasks. However, these models are limited by having inconsistent contextualized representations of subwords across different languages. Existing work addresses this issue through bilingual projection and fine-tuning techniques. We propose a data augmentation framework to generate multi-lingual code-switching data for fine-tuning mBERT, which encourages the model to align representations from the source and multiple target languages at once by mixing their context information. Compared with existing work, our method does not rely on bilingual sentences for training and requires only one training process for multiple target languages. Experimental results on five tasks spanning 19 languages show that our method leads to significantly improved performance on all tasks compared with mBERT.

Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che • 2020
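
The core augmentation idea from the abstract — replacing random words in a source-language sentence with translations drawn from bilingual dictionaries, so one sentence mixes several target languages — can be illustrated with a short sketch. This is a minimal toy version, not the authors' released implementation: the dictionary contents, the `replace_ratio` parameter, and the function name are all illustrative, assuming word-level substitution from MUSE-style bilingual lexicons.

```python
import random

# Toy bilingual dictionaries (illustrative only; the real method would draw
# on large bilingual lexicons). Keys are source-language words, values are
# lists of candidate target-language translations.
BILINGUAL_DICTS = {
    "es": {"weather": ["tiempo"], "today": ["hoy"], "good": ["bueno"]},
    "th": {"weather": ["อากาศ"], "today": ["วันนี้"]},
}

def code_switch(tokens, dicts, replace_ratio=0.5):
    """Randomly replace source tokens with translations from randomly
    chosen target-language dictionaries, yielding a multi-lingual
    code-switched sentence for fine-tuning."""
    augmented = []
    for tok in tokens:
        if random.random() < replace_ratio:
            lang = random.choice(list(dicts))        # pick a target language per token
            translations = dicts[lang].get(tok.lower())
            if translations:                         # substitute if a translation exists
                augmented.append(random.choice(translations))
                continue
        augmented.append(tok)                        # otherwise keep the source token
    return augmented

if __name__ == "__main__":
    random.seed(0)
    sentence = "what is the weather like today".split()
    print(" ".join(code_switch(sentence, BILINGUAL_DICTS)))
    # e.g. -> "what is the tiempo like วันนี้"
```

Because each token's replacement language is sampled independently, a single augmented sentence can mix several target languages, which is what lets one fine-tuning pass align the source representation with multiple target languages at once.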

Related benchmarks

Task | Dataset | Result | Rank
Intent Classification | Multilingual SLU EN-ES (test) | Accuracy: 94.8 | 6
Intent Classification | Multilingual SLU EN-TH (test) | Accuracy: 86.7 | 6
Intent Detection | MultiATIS++ (test) | Accuracy (en): 95.74 | 5
Slot Filling | MultiATIS++ (test) | Accuracy (de): 81.37 | 5
Spoken Language Understanding | MultiATIS++ (test) | Accuracy (en): 77.04 | 5
