
Distributionally Robust Classification for Multi-source Unsupervised Domain Adaptation

About

Unsupervised domain adaptation (UDA) is a statistical learning problem in which the distribution of the training (source) data differs from that of the test (target) data. In this setting, one has access to labeled data only from the source domain and to unlabeled data from the target domain; the central objective is to leverage both to build models that generalize to the target domain. Despite its potential, existing UDA approaches often struggle in practice, particularly when the target domain offers only limited unlabeled data or when spurious correlations dominate the source domain. To address these challenges, we propose a novel distributionally robust learning framework that models uncertainty in both the covariate distribution and the conditional label distribution. Our approach is motivated by the multi-source domain adaptation setting but applies directly to the single-source scenario as well, making it versatile in practice. We develop an efficient learning algorithm that integrates seamlessly with existing UDA methods. Extensive experiments under various distribution-shift scenarios show that our method consistently outperforms strong baselines, especially when target data are extremely scarce.
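The abstract does not spell out the learning algorithm, but a common building block in distributionally robust multi-source training is to reweight per-domain losses toward the worst-case source domain via mirror ascent (as in group DRO). The sketch below illustrates only that generic reweighting step, not the paper's actual method; the function name, step size `eta`, and the toy losses are all illustrative.

```python
import numpy as np

def worst_case_reweight(domain_losses, weights, eta=0.1):
    """One mirror-ascent step on the simplex of domain weights.

    Domains with larger current loss receive exponentially more
    weight, so the weighted training objective approximates the
    worst-case (most difficult) source domain.
    """
    w = weights * np.exp(eta * np.asarray(domain_losses))
    return w / w.sum()  # renormalize back onto the simplex

# Toy example: three source domains with fixed per-domain losses.
losses = np.array([0.2, 0.9, 0.4])
w = np.full(3, 1.0 / 3.0)          # start from uniform weights
for _ in range(50):
    w = worst_case_reweight(losses, w)
# The weight vector concentrates on the hardest domain (index 1).
```

In a full training loop the per-domain losses would be recomputed from the current model at every step, so the weights track whichever source domain the model currently fits worst.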

Seonghwi Kim, Sung Ho Jo, Wooseok Ha, Minwoo Chae• 2026

Related benchmarks

Task                 | Dataset          | Result                | Rank
---------------------|------------------|-----------------------|-----
Classification       | CelebA (test)    | Average Accuracy 85   | 92
Image Classification | CMNIST (test)    | Test Accuracy 7.5     | 55
Digit Classification | USPS -> MNIST    | Accuracy 97.3         | 38
Digit Classification | MNIST -> USPS    | Accuracy 95.6         | 34
Digit Classification | SVHN -> MNIST    | Accuracy 0.944        | 28
Classification       | Waterbirds (test)| Test Accuracy 87.3    | 15
