Domain Generalization via Invariant Feature Representation
About
This paper investigates domain generalization: how can knowledge acquired from an arbitrary number of related domains be applied to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.
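To make the core idea concrete, the following is a minimal linear sketch of the principle behind DICA, not the paper's exact kernel formulation: find projection directions that preserve overall variance while minimizing the variance of the per-domain means (a simple proxy for cross-domain dissimilarity). The function name `invariant_projection` and the generalized-eigenproblem formulation are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def invariant_projection(domains, n_components=2, eps=1e-3):
    """Linear sketch of domain-invariant feature learning.

    domains: list of (n_d, p) arrays, one per source domain.
    Returns a (p, n_components) projection matrix whose directions
    have high total variance but low variance across domain means
    (i.e., low cross-domain dissimilarity).
    """
    X = np.vstack(domains)
    X = X - X.mean(axis=0)                       # centre globally
    C_total = X.T @ X / len(X)                   # overall covariance
    means = np.stack([d.mean(axis=0) for d in domains])
    means = means - means.mean(axis=0)
    C_between = means.T @ means / len(domains)   # domain-mean scatter
    # Generalized eigenproblem: maximise total variance relative to
    # between-domain variance (eps regularises the denominator).
    vals, vecs = eigh(C_total, C_between + eps * np.eye(X.shape[1]))
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]         # projection matrix W
```

In this sketch, directions along which the domains' means disagree get a large denominator and are suppressed, so the returned components are approximately invariant across the source domains; the full DICA algorithm performs an analogous trade-off in a reproducing kernel Hilbert space and can additionally account for the output variable.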
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | PACS (test) | Average Accuracy | 83.2 | 254 |
| Domain Generalization | VLCS | Accuracy | 78.3 | 238 |
| Image Classification | PACS | Overall Average Accuracy | 68 | 230 |
| Domain Generalization | PACS (test) | Average Accuracy | 50.27 | 225 |
| Domain Generalization | PACS | Accuracy (Art) | 64.6 | 221 |
| Multi-class Classification | VLCS | Acc (Caltech) | 98.3 | 139 |
| Object Recognition | PACS (leave-one-domain-out) | -- | -- | 112 |
| Image Classification | PACS v1 (test) | Average Accuracy | 68 | 92 |
| Multi-class Classification | PACS (test) | Accuracy (Art Painting) | 64.57 | 76 |
| Object Recognition | VLCS | Average Accuracy | 65.7 | 31 |