Large-Scale Optimal Transport and Mapping Estimation
About
This paper presents a novel two-step approach to the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought of as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach to regularized OT, and show empirically that it scales better than a recent related approach when the number of samples is very large. Second, we estimate a *Monge map* as a deep neural network learned by approximating the barycentric projection of the previously obtained OT plan. This parameterization allows the mapping to generalize outside the support of the input measure. We prove two theoretical stability results for regularized OT, which show that our estimates converge to the OT plan and the Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling.
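The two-step procedure above can be sketched on toy data. The snippet below is a minimal illustration, not the paper's implementation: it computes an entropy-regularized OT plan with plain Sinkhorn iterations (standing in for the paper's stochastic dual solver), takes the barycentric projection of the plan, and fits a parametric map to it (a linear least-squares fit stands in for the deep network). All sample sizes, distributions, and the regularization strength `eps` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D source and target samples (assumption: Gaussians with shifted means)
n = 200
x = rng.normal(0.0, 1.0, size=(n, 1))   # source samples
y = rng.normal(3.0, 1.0, size=(n, 1))   # target samples

# Step 1: entropy-regularized OT plan via Sinkhorn iterations
# (the paper uses a stochastic dual solver; Sinkhorn is a simple stand-in)
eps = 1.0                                # regularization strength (assumption)
C = (x - y.T) ** 2                       # squared-Euclidean cost matrix
K = np.exp(-C / eps)                     # Gibbs kernel
a = np.full(n, 1.0 / n)                  # uniform source marginal
b = np.full(n, 1.0 / n)                  # uniform target marginal
u = np.ones(n)
for _ in range(500):
    v = b / (K.T @ u)
    u = a / (K @ v)
pi = u[:, None] * K * v[None, :]         # OT plan: a one-to-many coupling

# Step 2: the barycentric projection turns the plan into a point-to-point map
T_bary = (pi @ y) / pi.sum(axis=1, keepdims=True)

# Fit a parametric map to the projection; a linear least-squares fit
# replaces the deep network of the paper for brevity
X = np.hstack([x, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(X, T_bary, rcond=None)
T = lambda z: np.hstack([z, np.ones((len(z), 1))]) @ w

# The fitted map can now be evaluated outside the training support
print(float(T(np.zeros((1, 1)))[0, 0]))  # should be near the target mean (3)
```

Because the fitted map `T` is parametric, it can be evaluated at points not present in the source sample, which is the generalization property the paper's neural parameterization provides.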
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Target Distribution Fitting | High-dimensional Gaussian | BW2^2-UVP | 182 | 28 |
| Super-Resolution | CelebA | FID | 190.1 | 24 |
| Identity | CelebA | FID | 188.3 | 14 |
| EOT plan recovery | Gaussian Dim 2 | BW2-UVP | 677 | 7 |
| EOT plan recovery | Gaussian Dim 16 | BW2-UVP | 1.46e+3 | 7 |
| EOT plan recovery | Gaussian Dim 64 | BW2-UVP | 2.56e+3 | 7 |
| EOT plan recovery | Gaussian Dim 128 | BW2-UVP | 4.71e+3 | 7 |
| Marginal Distribution Recovery | 16D Gaussian (test) | BW2-UVP (t=0) | 0.00e+0 | 7 |