
Learning Texture Invariant Representation for Domain Adaptation of Semantic Segmentation

About

Since annotating pixel-level labels for semantic segmentation is laborious, leveraging synthetic data is an attractive solution. However, due to the domain gap between the synthetic and real domains, it is challenging for a model trained with synthetic data to generalize to real data. In this paper, considering texture as the fundamental difference between the two domains, we propose a method to adapt to the texture of the target domain. First, we diversify the texture of synthetic images using a style transfer algorithm. The varied textures of the generated images prevent a segmentation model from overfitting to one specific (synthetic) texture. Then, we fine-tune the model with self-training to obtain direct supervision from the target texture. Our results achieve state-of-the-art performance, and we analyze the properties of the model trained on the stylized dataset with extensive experiments.
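The second stage described above, self-training, fine-tunes the model on target-domain images using its own predictions as labels. A core piece of that step is keeping only confident pixels as pseudo-labels. Below is a minimal sketch of confidence-thresholded pseudo-label generation; the function name, the threshold value, and the ignore-index convention are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

IGNORE_INDEX = 255  # common "ignore" label in Cityscapes-style segmentation

def generate_pseudo_labels(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Turn per-pixel class probabilities of shape (C, H, W) into pseudo-labels.

    Pixels whose top-class confidence falls below `threshold` are marked with
    IGNORE_INDEX so the segmentation loss skips them during fine-tuning.
    """
    confidence = probs.max(axis=0)   # (H, W): highest class probability per pixel
    labels = probs.argmax(axis=0)    # (H, W): predicted class id per pixel
    labels[confidence < threshold] = IGNORE_INDEX
    return labels

# Toy example: 3 classes on a 2x2 image.
probs = np.array([
    [[0.95, 0.40], [0.10, 0.30]],   # class 0 probabilities
    [[0.03, 0.35], [0.85, 0.30]],   # class 1 probabilities
    [[0.02, 0.25], [0.05, 0.40]],   # class 2 probabilities
])
pseudo = generate_pseudo_labels(probs, threshold=0.8)
# Confident pixels keep their argmax class; uncertain ones become IGNORE_INDEX.
```

During fine-tuning, the cross-entropy loss would be configured to ignore the IGNORE_INDEX value, so only high-confidence target pixels contribute gradient.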

Myeongjin Kim, Hyeran Byun • 2020

Related benchmarks

| Task                  | Dataset                                       | Result              | Rank |
|-----------------------|-----------------------------------------------|---------------------|------|
| Semantic segmentation | GTA5 → Cityscapes (val)                       | mIoU 50.2           | 533  |
| Semantic segmentation | SYNTHIA to Cityscapes (val)                   | Rider IoU 52.6      | 435  |
| Semantic segmentation | Cityscapes GTA5 to Cityscapes adaptation (val)| mIoU (Overall) 50.2 | 352  |
| Semantic segmentation | SYNTHIA to Cityscapes                         | Road IoU 92.6       | 150  |
| Semantic segmentation | Synthia to Cityscapes (test)                  | Road IoU 92.6       | 138  |
| Semantic segmentation | Cityscapes (val)                              | mIoU 50.2           | 133  |
| Semantic segmentation | GTA5 to Cityscapes 1.0 (val)                  | Road IoU 92.9       | 98   |
| Semantic segmentation | GTA to Cityscapes                             | Road IoU 92.9       | 72   |
| Semantic segmentation | Cityscapes trained on SYNTHIA (val)           | Road IoU 92.6       | 60   |
| Semantic segmentation | Cityscapes GTA5 source 1.0 (val)              | mIoU 50.2           | 49   |
Showing 10 of 13 rows
