
Self-Challenging Improves Cross-Domain Generalization

About

Convolutional Neural Networks (CNNs) conduct image classification by activating dominant features that correlate with labels. When the training and testing data come from similar distributions, their dominant features are similar, which usually yields decent performance on the testing data. Performance nonetheless degrades when the model is tested on samples from a different distribution, which is the central challenge in cross-domain image classification. We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data. RSC iteratively challenges (discards) the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels. This process appears to activate feature representations applicable to out-of-domain data without prior knowledge of the new domain and without learning extra network parameters. We present the theoretical properties of RSC and the conditions under which it improves cross-domain generalization. The experiments endorse the simple, effective, and architecture-agnostic nature of our RSC method.

Zeyi Huang, Haohan Wang, Eric P. Xing, Dong Huang • 2020
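
As a rough illustration of the training heuristic described above, the sketch below mutes the highest-gradient spatial units of a CNN's feature maps before computing the classification loss, forcing the remaining features to carry the prediction. This is a minimal PyTorch sketch, not the authors' reference implementation: the tiny backbone, the backbone/classifier split, and the drop ratio are illustrative assumptions.

```python
# Minimal sketch of the Representation Self-Challenging (RSC) idea.
# The TinyCNN model, rsc_step helper, and drop_ratio value are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, num_classes)

    def head(self, feats):
        # Global-average-pool the feature maps and classify.
        return self.classifier(F.adaptive_avg_pool2d(feats, 1).flatten(1))

def rsc_step(model, x, y, optimizer, drop_ratio=1/3):
    feats = model.features(x)                        # (B, C, H, W) activations

    # 1. Score each spatial unit by the gradient of the ground-truth class score.
    logits = model.head(feats)
    gt_score = logits.gather(1, y.unsqueeze(1)).sum()
    grads = torch.autograd.grad(gt_score, feats, retain_graph=True)[0]
    saliency = grads.mean(dim=1, keepdim=True)       # (B, 1, H, W)

    # 2. Build a mask that zeroes (challenges) the top drop_ratio fraction of units.
    flat = saliency.flatten(1)
    k = max(1, int(flat.shape[1] * drop_ratio))
    thresh = flat.topk(k, dim=1).values[:, -1].view(-1, 1, 1, 1)
    mask = (saliency < thresh).float()

    # 3. Re-classify with the dominant units muted and train on that loss.
    loss = F.cross_entropy(model.head(feats * mask), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 7, (8,))
print(rsc_step(model, x, y, opt))
```

Because no new parameters are introduced and the mask is derived only from the network's own gradients, the same step can in principle wrap any CNN backbone, which is the sense in which the method is architecture-agnostic.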

Related benchmarks

Task                  | Dataset                     | Metric                   | Result | Rank
Image Classification  | ImageNet-Sketch             | Top-1 Accuracy           | 16.1   | 407
Image Classification  | PACS (test)                 | Average Accuracy         | 82.1   | 271
Image Classification  | PACS                        | Overall Average Accuracy | 85.15  | 241
Domain Generalization | VLCS                        | Accuracy                 | 77.1   | 238
Domain Generalization | PACS                        | Accuracy                 | 87.83  | 231
Domain Generalization | PACS (test)                 | Average Accuracy         | 62.6   | 225
Domain Generalization | OfficeHome                  | Accuracy                 | 65.5   | 202
Image Classification  | Office-Home (test)          | Mean Accuracy            | 63.12  | 199
Image Classification  | ImageNet-Sketch (test)      | Top-1 Accuracy           | 0.266  | 153
Domain Generalization | PACS (leave-one-domain-out) | Art Accuracy             | 87.89  | 152

Showing 10 of 97 rows.
