
CLUDA : Contrastive Learning in Unsupervised Domain Adaptation for Semantic Segmentation

About

In this work, we propose CLUDA, a simple yet novel method for performing unsupervised domain adaptation (UDA) for semantic segmentation by incorporating contrastive losses into a student-teacher learning paradigm that makes use of pseudo-labels generated from the target domain by the teacher network. More specifically, we extract a multi-level fused feature map from the encoder and apply a contrastive loss across different classes and different domains via source-target mixing of images. We consistently improve performance across various feature-encoder architectures and across different domain adaptation datasets for semantic segmentation. Furthermore, we introduce a learned-weighted contrastive loss to improve upon a state-of-the-art multi-resolution training approach in UDA. We produce state-of-the-art results on GTA $\rightarrow$ Cityscapes (74.4 mIoU, +0.6) and Synthia $\rightarrow$ Cityscapes (67.2 mIoU, +1.4). CLUDA demonstrates contrastive learning in UDA as a generic method that can be easily integrated into any existing UDA method for semantic segmentation. Please refer to the supplementary material for implementation details.
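The core idea — pulling together features of the same class (across source and target domains) and pushing apart features of different classes — can be sketched as an InfoNCE-style class-conditional contrastive loss. The sketch below is a simplified illustration, not the paper's exact loss: the function names, the cosine similarity choice, and the temperature value `tau` are assumptions, and the real method operates on pseudo-labeled, multi-level fused feature maps rather than plain vectors.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classwise_contrastive_loss(features, labels, tau=0.1):
    """Illustrative InfoNCE-style loss (an assumption, not CLUDA's exact form):
    for each anchor feature, features with the same (pseudo-)label are
    positives and all other features are negatives, regardless of which
    domain they came from."""
    losses = []
    n = len(features)
    for i in range(n):
        pos_sum, all_sum = 0.0, 0.0
        for j in range(n):
            if j == i:
                continue
            s = math.exp(cosine(features[i], features[j]) / tau)
            all_sum += s
            if labels[j] == labels[i]:
                pos_sum += s
        if pos_sum > 0:  # skip anchors with no positive pair
            losses.append(-math.log(pos_sum / all_sum))
    return sum(losses) / len(losses) if losses else 0.0
```

With well-separated classes the loss is near zero, and it grows when same-class features are far apart while different-class features are close — the signal the student network is trained against.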

Midhun Vayyat, Jaswin Kasi, Anuraag Bhattacharya, Shuaib Ahmed, Rahul Tallamraju · 2022

Related benchmarks

Task | Dataset | Result | Rank
Semantic segmentation | SYNTHIA to Cityscapes (val) | Rider IoU: 57.1 | 435
Semantic segmentation | GTA5 to Cityscapes 1.0 (val) | Road IoU: 97.5 | 98
Remaining Useful Life prediction | C-MAPSS | RMSE (F1->F2): 44.36 | 10

Other info

Code
