
Reducing Domain Gap by Reducing Style Bias

About

Convolutional Neural Networks (CNNs) often fail to maintain their performance when they confront new test domains, a problem known as domain shift. Recent studies suggest that one of its main causes is CNNs' strong inductive bias towards image styles (i.e., textures), which are sensitive to domain changes, rather than contents (i.e., shapes). Inspired by this, we propose to reduce the intrinsic style bias of CNNs to close the gap between domains. Our Style-Agnostic Networks (SagNets) disentangle style encodings from class categories to prevent style-biased predictions and focus more on the contents. Extensive experiments show that our method effectively reduces the style bias and makes the model more robust under domain shift, achieving remarkable performance improvements in a wide range of cross-domain tasks including domain generalization, unsupervised domain adaptation, and semi-supervised domain adaptation on multiple datasets.

Hyeonseob Nam, HyunJae Lee, Jongchan Park, Wonjun Yoon, Donggeun Yoo • 2019

Related benchmarks

| Task                       | Dataset                      | Metric                   | Result | Rank |
|----------------------------|------------------------------|--------------------------|--------|------|
| Image Classification       | PACS (test)                  | Average Accuracy         | 82.3   | 254  |
| Domain Generalization      | VLCS                         | Accuracy                 | 77.8   | 238  |
| Image Classification       | PACS                         | Overall Average Accuracy | 83.2   | 230  |
| Domain Generalization      | PACS (test)                  | Average Accuracy         | 83.25  | 225  |
| Domain Generalization      | PACS                         | Accuracy (Art)           | 87.4   | 221  |
| Domain Generalization      | OfficeHome                   | Accuracy                 | 68.1   | 182  |
| Image Classification       | DomainNet                    | Accuracy (ClipArt)       | 57.7   | 161  |
| Domain Generalization      | PACS (leave-one-domain-out)  | Art Accuracy             | 87.4   | 146  |
| Image Classification       | Office-Home                  | Average Accuracy         | 62.34  | 142  |
| Multi-class classification | VLCS                         | Acc (Caltech)            | 97.3   | 139  |
(Showing 10 of 67 benchmark rows.)

Other info

Code
