Confidence Regularized Self-Training
About
Recent advances in domain adaptation show that deep self-training is a powerful means for unsupervised domain adaptation. These methods typically iterate between predicting on the target domain and taking the confident predictions as pseudo-labels for retraining. However, since pseudo-labels can be noisy, self-training can place overconfident label belief on wrong classes, leading to deviated solutions with propagated errors. To address this problem, we propose a confidence regularized self-training (CRST) framework. Our method treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization, and introduces two types of confidence regularization: label regularization (LR) and model regularization (MR). CRST-LR generates soft pseudo-labels, while CRST-MR encourages smoothness of the network output. Extensive experiments on image classification and semantic segmentation show that CRSTs outperform their non-regularized counterpart and achieve state-of-the-art performance. The code and models of this work are available at https://github.com/yzou2/CRST.
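The pseudo-label step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the reference implementation: it assumes the entropy-based label regularizer from the paper, for which the regularized pseudo-label problem has a closed-form soft label proportional to p^(1/α). The helper name `soft_pseudo_labels`, the confidence threshold, and the α value are illustrative choices, not part of the released code.

```python
import numpy as np

def soft_pseudo_labels(probs, alpha=2.0, threshold=0.9):
    """Sketch of confidence-regularized soft pseudo-label generation.

    probs:     (N, C) array of predicted class probabilities on target data.
    alpha:     illustrative regularizer weight; larger alpha -> softer labels.
    threshold: illustrative confidence cutoff for selecting samples.

    With an entropy label regularizer, the optimal soft label is
    proportional to probs ** (1 / alpha), renormalized over classes.
    Returns the soft labels and a boolean mask of confident samples.
    """
    probs = np.asarray(probs, dtype=float)
    confident = probs.max(axis=1) >= threshold      # which samples to pseudo-label
    sharpened = probs ** (1.0 / alpha)              # soften the distribution
    labels = sharpened / sharpened.sum(axis=1, keepdims=True)
    return labels, confident
```

In a full CRST-LR loop these soft labels would replace one-hot pseudo-labels in the retraining loss, which is what prevents the overconfident label belief the paragraph above describes.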
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic segmentation | GTA5 → Cityscapes (val) | mIoU | 49.8 | 533 |
| Semantic segmentation | SYNTHIA to Cityscapes (val) | Rider IoU | 82.8 | 435 |
| Semantic segmentation | GTA5 to Cityscapes adaptation (val) | mIoU (Overall) | 47.1 | 352 |
| Image classification | Office-31 | Average Accuracy | 86.8 | 261 |
| Semantic segmentation | GTA5 to Cityscapes (test) | mIoU | 47.1 | 151 |
| Semantic segmentation | SYNTHIA to Cityscapes | Road IoU | 69.6 | 150 |
| Semantic segmentation | SYNTHIA to Cityscapes (test) | Road IoU | 67.7 | 138 |
| Semantic segmentation | Cityscapes (val) | mIoU | 48.5 | 133 |
| Semantic segmentation | Cityscapes adaptation from SYNTHIA 1.0 (val) | Person IoU | 60.8 | 114 |
| Domain adaptation | VisDA 2017 (test) | Mean Class Accuracy | 78.1 | 98 |