
Rethinking Domain Generalization Baselines

About

Despite being very powerful in standard learning settings, deep learning models can be extremely brittle when deployed in scenarios different from those on which they were trained. Domain generalization methods investigate this problem, and data augmentation strategies have been shown to be helpful tools for increasing data variability, supporting model robustness across domains. In our work we focus on style transfer data augmentation, and we show how it can be implemented with a simple and inexpensive strategy to improve generalization. Moreover, we analyze the behavior of current state-of-the-art domain generalization methods when integrated with this augmentation solution: our thorough experimental evaluation shows that their original effect almost always disappears with respect to the augmented baseline. This finding opens new scenarios for domain generalization research, highlighting the need for novel methods that can properly take advantage of the introduced data variability.
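Style transfer data augmentation of this kind typically relies on aligning the per-channel feature statistics of a content image to those of a style image, as in adaptive instance normalization (AdaIN). The sketch below illustrates that statistic alignment on plain numpy arrays; the function name, shapes, and setup are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Align the per-channel mean/std of `content` features to `style` features.

    content, style: float arrays of shape (C, H, W).
    Returns an array with the spatial structure of `content`
    but the channel-wise statistics of `style`.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # Normalize content statistics, then re-scale/shift to the style statistics
    return s_std * (content - c_mean) / c_std + s_mean

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(3, 8, 8))   # hypothetical content features
style = rng.normal(2.0, 3.0, size=(3, 8, 8))     # hypothetical style features
stylized = adain(content, style)
```

In an augmentation pipeline, `stylized` would replace (or be mixed with) the original sample during training, injecting the style statistics of images from other domains at low cost.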

Francesco Cappio Borlino, Antonio D'Innocente, Tatiana Tommasi • 2021

Related benchmarks

Task | Dataset | Result | Rank
---- | ------- | ------ | ----
Image Classification | Office-Home (test) | Mean Accuracy: 64.75 | 199
Image Classification | Office-Home | Average Accuracy: 64.75 | 142
Multi-class Classification | VLCS | Acc (Caltech): 97.49 | 139
Object Recognition | PACS (leave-one-domain-out) | Acc (Art Painting): 82.73 | 112
Multi-class Classification | PACS (test) | Accuracy (Art Painting): 82.73 | 76
Image Classification | VLCS (test) | Average Accuracy: 72.31 | 65
