Multi-Source Domain Adaptation with Mixture of Experts
About
We propose a mixture-of-experts approach for unsupervised domain adaptation from multiple sources. The key idea is to explicitly capture the relationship between a target example and different source domains. This relationship, expressed by a point-to-set metric, determines how to combine predictors trained on various domains. The metric is learned in an unsupervised fashion using meta-training. Experimental results on sentiment analysis and part-of-speech tagging demonstrate that our approach consistently outperforms multiple baselines and can robustly handle negative transfer.
Jiang Guo, Darsh J Shah, Regina Barzilay • 2018
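The abstract describes the core mechanism: one expert predictor per source domain, combined by weights that come from a learned point-to-set metric between the target example and each source domain. The following is a minimal, hypothetical PyTorch sketch of that idea; the bilinear metric, the use of mean-encoding prototypes per domain, and all dimensions are illustrative assumptions rather than the authors' implementation (which additionally learns the metric via unsupervised meta-training).

```python
# Sketch only (not the paper's code): per-domain experts mixed by weights from a
# learned point-to-set metric between a target example and each source domain.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixtureOfExperts(nn.Module):
    def __init__(self, input_dim: int, num_classes: int, num_domains: int):
        super().__init__()
        # One expert classifier per source domain (assumed linear here).
        self.experts = nn.ModuleList(
            [nn.Linear(input_dim, num_classes) for _ in range(num_domains)]
        )
        # A simple bilinear point-to-set metric: scores an encoded target example
        # against a prototype (e.g. mean encoding) of each source domain.
        self.metric = nn.Bilinear(input_dim, input_dim, 1)

    def forward(self, x: torch.Tensor, domain_prototypes: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_dim); domain_prototypes: (num_domains, input_dim)
        batch, dim = x.shape
        num_domains = domain_prototypes.size(0)

        # Score every example against every domain prototype.
        x_rep = x.unsqueeze(1).expand(batch, num_domains, dim).reshape(-1, dim)
        p_rep = domain_prototypes.unsqueeze(0).expand(batch, num_domains, dim).reshape(-1, dim)
        scores = self.metric(x_rep, p_rep).view(batch, num_domains)

        # Normalise the scores into mixture weights over source domains.
        weights = F.softmax(scores, dim=-1)  # (batch, num_domains)

        # Per-domain expert predictions, combined with the learned weights.
        expert_probs = torch.stack(
            [F.softmax(expert(x), dim=-1) for expert in self.experts], dim=1
        )  # (batch, num_domains, num_classes)
        return (weights.unsqueeze(-1) * expert_probs).sum(dim=1)


if __name__ == "__main__":
    model = MixtureOfExperts(input_dim=128, num_classes=3, num_domains=4)
    x = torch.randn(8, 128)            # encoded target examples (assumed encoder output)
    prototypes = torch.randn(4, 128)   # one prototype per source domain
    probs = model(x, prototypes)
    print(probs.shape)                 # torch.Size([8, 3])
```

In this sketch the point-to-set relationship is reduced to a bilinear score against a single prototype per domain; a richer metric could compare the target example to a whole set of source encodings, which is closer to what the abstract describes.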
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Inference | XNLI (test) | Average Accuracy | 62.18 | 167 |
| Named Entity Recognition | WikiAnn (test) | Average Accuracy | 68.6 | 58 |
| Review Rating Classification | Amazon Reviews en, es, fr | Accuracy (de) | 50.94 | 6 |
| Review Rating Classification | Amazon Reviews en, ja, zh | Accuracy (de) | 0.4969 | 6 |