Multi-Adversarial Domain Adaptation
About
Recent advances in deep domain adaptation show that adversarial learning can be embedded into deep networks to learn transferable features that reduce the distribution discrepancy between the source and target domains. Existing adversarial domain adaptation methods based on a single domain discriminator only align the source and target data distributions as a whole, without exploiting their complex multimode structures. In this paper, we present a multi-adversarial domain adaptation (MADA) approach, which captures multimode structures to enable fine-grained alignment of different data distributions based on multiple class-specific domain discriminators. The adaptation can be achieved by stochastic gradient descent, with the gradients computed by back-propagation in linear time. Empirical evidence demonstrates that the proposed model outperforms state-of-the-art methods on standard domain adaptation datasets.
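The core idea above is that instead of one global domain discriminator, MADA trains one discriminator per class and weights each sample's contribution to the k-th discriminator by the classifier's predicted probability for class k. The following is a minimal, framework-free sketch of that weighted multi-adversarial domain loss; the function names (`bce`, `mada_domain_loss`) and the input layout are illustrative assumptions, not the authors' reference implementation.

```python
import math

def bce(prob, label):
    # Binary cross-entropy for a single source-vs-target domain prediction.
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(prob + eps) + (1 - label) * math.log(1 - prob + eps))

def mada_domain_loss(class_probs, disc_probs, domain_labels):
    """Hypothetical sketch of MADA's multi-adversarial domain loss.

    class_probs[i][k]  : classifier's softmax probability that sample i is class k
    disc_probs[i][k]   : output of the k-th class-specific domain discriminator
                         for sample i (probability of being a source sample)
    domain_labels[i]   : 1 for source samples, 0 for target samples
    """
    n = len(domain_labels)
    num_classes = len(class_probs[0])
    total = 0.0
    for i in range(n):
        for k in range(num_classes):
            # Each class-specific discriminator's loss is weighted by the
            # probability that sample i belongs to class k, so uncertain
            # samples contribute little to any single discriminator.
            total += class_probs[i][k] * bce(disc_probs[i][k], domain_labels[i])
    return total / n
```

In the full model this loss is minimized by the discriminators and maximized by the feature extractor (e.g. via a gradient reversal layer), which is what makes a single SGD pass with back-propagation sufficient, as the abstract notes.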
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | Office-31 | Average Accuracy | 85.2 | 261 |
| Domain Adaptation | Office-31 unsupervised adaptation standard | Accuracy (A to W) | 90 | 162 |
| Domain Adaptation | Office-31 | Accuracy (A -> W) | 90 | 156 |
| Action Segmentation | 50Salads | Edit Distance | 72.4 | 114 |
| Unsupervised Domain Adaptation | ImageCLEF-DA | Average Accuracy | 85.8 | 104 |
| Temporal Action Segmentation | Breakfast | -- | -- | 96 |
| Domain Adaptation | Image-CLEF DA (test) | Average Accuracy | 85.8 | 76 |
| Image Classification | ImageCLEF-DA | Accuracy (I -> P) | 75 | 37 |
| Unsupervised Domain Adaptation | Office-31 (full) | Average Accuracy | 85.2 | 36 |
| Domain Adaptation Classification | Office-31 (test) | A -> W Accuracy | 90 | 31 |