
Bridging the domain gap in cross-lingual document classification

About

The scarcity of labeled training data often prohibits the internationalization of NLP models to multiple languages. Recent developments in cross-lingual understanding (XLU) have made progress in this area, attempting to bridge the language barrier with language-universal representations. However, even if the language barrier were fully resolved, models trained in one language would not transfer perfectly to another because of the natural domain drift across languages and cultures. We consider the setting of semi-supervised cross-lingual understanding, where labeled data is available in a source language (English) but only unlabeled data is available in the target language. We combine state-of-the-art cross-lingual methods with recently proposed methods for weakly supervised learning, such as unsupervised pre-training and unsupervised data augmentation, to simultaneously close both the language gap and the domain gap in XLU. We show that addressing the domain gap is crucial: we improve over strong baselines and achieve a new state of the art for cross-lingual document classification.
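The unsupervised data augmentation idea mentioned above can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function names (`uda_objective`, `lam`) and the batch shapes are assumptions for illustration. The objective combines a supervised cross-entropy term on labeled source-language examples with a consistency term that pushes the model's predictions on unlabeled target-language text and on an augmented copy of that text to agree:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the gold labels.
    probs = softmax(logits)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def kl_divergence(p, q):
    # Mean KL(p || q) over a batch of probability distributions.
    return np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1))

def uda_objective(sup_logits, sup_labels,
                  unsup_logits, unsup_aug_logits, lam=1.0):
    """UDA-style semi-supervised loss (illustrative sketch).

    sup_logits/sup_labels: model outputs and labels for the labeled
    source-language (English) batch. unsup_logits/unsup_aug_logits:
    model outputs for an unlabeled target-language batch and for an
    augmented version of the same texts. `lam` weights the
    unsupervised consistency term.
    """
    sup_loss = cross_entropy(sup_logits, sup_labels)
    # In a real training loop the prediction on the clean text is
    # treated as a fixed target (no gradient flows through it).
    target = softmax(unsup_logits)
    consistency = kl_divergence(target, softmax(unsup_aug_logits))
    return sup_loss + lam * consistency
```

When the augmented and clean predictions are identical, the consistency term is zero and the objective reduces to the supervised loss; any disagreement adds a non-negative penalty, which is what lets unlabeled target-language text shape the decision boundary.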

Guokun Lai, Barlas Oguz, Yiming Yang, Veselin Stoyanov • 2019

Related benchmarks

Task                          Dataset           Metric           Result   Rank
News document classification  MLDoc (test)      Error Rate (FR)  3.95     9
Sentiment Classification      amazon-fr (test)  Error Rate (%)   5.95     8
Sentiment Classification      amazon-de (test)  Error Rate       5.77     8
Sentiment Classification      amazon-cn (test)  Error Rate       7.74     8
Sentiment Classification      dianping (test)   Error Rate       0.0464   8
