
Information-theoretic regularization for Multi-source Domain Adaptation

About

Adversarial learning strategies have demonstrated remarkable performance in single-source Domain Adaptation (DA) problems and have recently been applied to Multi-source DA (MDA) problems. Although most existing MDA strategies rely on a multiple-domain-discriminator setting, the effect of this setting on the latent space representations has been poorly understood. Here we adopt an information-theoretic approach to identify and resolve the potential adverse effects of multiple domain discriminators on MDA: disintegration of domain-discriminative information, limited computational scalability, and a large variance in the gradient of the loss during training. We examine these issues by situating adversarial DA in the context of information regularization, which also provides a theoretical justification for using a single, unified domain discriminator. Based on this idea, we implement a novel neural architecture called Multi-source Information-regularized Adaptation Networks (MIAN). Large-scale experiments demonstrate that MIAN, despite its structural simplicity, reliably and significantly outperforms other state-of-the-art methods.
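As a rough illustration of the single-discriminator idea (a hypothetical NumPy sketch, not the authors' implementation): instead of training K separate binary source-vs-target discriminators, one discriminator classifies each feature into one of K source domains or the target, i.e. a single (K+1)-way cross-entropy objective. All names and shapes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3   # number of source domains (assumed for illustration)
D = 16  # feature dimension
N = 8   # samples per domain

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def unified_discriminator_loss(features, domain_labels, W, b):
    """(K+1)-way cross-entropy of a single, unified domain discriminator.

    A linear classifier stands in for the discriminator network; in
    adversarial training the feature extractor would be updated to
    *increase* this loss (confuse the discriminator).
    """
    probs = softmax(features @ W + b)
    n = features.shape[0]
    return -np.log(probs[np.arange(n), domain_labels]).mean()

# Features drawn from K source domains plus one target domain,
# with domain labels 0..K.
features = rng.normal(size=((K + 1) * N, D))
labels = np.repeat(np.arange(K + 1), N)

W = rng.normal(scale=0.1, size=(D, K + 1))
b = np.zeros(K + 1)

loss = unified_discriminator_loss(features, labels, W, b)
print(loss)
```

With untrained weights the loss sits near log(K+1), the entropy of a uniform guess over the K+1 domains; the point of the sketch is only that one classifier head replaces K binary discriminators.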

Geon Yeong Park, Sang Wan Lee • 2021

Related benchmarks

Task                         Dataset                                       Result                Rank
Multi-source closed-set UDA  Office-Home (target domains Ar, Cl, Pr, Re)   Accuracy (Ar): 69.9   16
Multi-source closed-set UDA  Office (target domains A, D, W)               Acc (Target A): 76.2  13
