
Max-Sliced Wasserstein Distance and its use for GANs

About

Generative adversarial nets (GANs) and variational auto-encoders have significantly improved our distribution modeling capabilities, showing promise for dataset augmentation, image-to-image translation and feature learning. However, to model high-dimensional distributions, sequential training and stacked architectures are common, increasing the number of tunable hyper-parameters as well as the training time. Nonetheless, the sample complexity of the distance metrics remains one of the factors affecting GAN training. We first show that the recently proposed sliced Wasserstein distance has compelling sample complexity properties when compared to the Wasserstein distance. To further improve the sliced Wasserstein distance we then analyze its "projection complexity" and develop the max-sliced Wasserstein distance which enjoys compelling sample complexity while reducing projection complexity, albeit necessitating a max estimation. We finally illustrate that the proposed distance trains GANs on high-dimensional images up to a resolution of 256x256 easily.
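The two distances compared in the abstract can be sketched numerically. A minimal NumPy illustration, assuming empirical samples of equal size: the sliced Wasserstein distance averages 1D Wasserstein distances over random projection directions, while the max-sliced variant keeps only the worst direction. Here the max is crudely approximated by taking the maximum over the same sampled directions (the paper instead optimizes over the direction, which this sketch does not implement); all function names are illustrative, not from the paper's code.

```python
import numpy as np

def wasserstein_1d(a, b):
    # Squared 2-Wasserstein distance between two equal-size 1D empirical
    # samples: sort both and average the squared differences.
    return np.mean((np.sort(a) - np.sort(b)) ** 2)

def sliced_wasserstein(X, Y, n_proj=100, rng=None):
    # Monte Carlo estimate of the (squared) sliced Wasserstein distance:
    # project onto random unit directions and average the 1D distances.
    rng = np.random.default_rng(rng)
    theta = rng.normal(size=(n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    return np.mean([wasserstein_1d(X @ t, Y @ t) for t in theta])

def max_sliced_wasserstein(X, Y, n_proj=100, rng=None):
    # Crude surrogate for the max-sliced distance: the worst direction
    # among the sampled ones. (A faithful implementation would maximize
    # over the unit sphere, e.g. by gradient ascent on the direction.)
    rng = np.random.default_rng(rng)
    theta = rng.normal(size=(n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    return max(wasserstein_1d(X @ t, Y @ t) for t in theta)
```

With the same set of sampled directions, the max over directions is at least the mean, so this surrogate always upper-bounds the sliced estimate; the point of the max-sliced distance is that a single well-chosen direction can carry the training signal that otherwise requires many random projections.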

Ishan Deshpande, Yuan-Ting Hu, Ruoyu Sun, Ayis Pyrros, Nasir Siddiqui, Sanmi Koyejo, Zhizhen Zhao, David Forsyth, Alexander Schwing • 2019

Related benchmarks

Task                       | Dataset                     | Result                    | Rank
Image Generation           | CIFAR-10 32x32              | FID 43.33                 | 44
Image Generation           | CelebA-64                   | FID 16.79                 | 31
Image Generation           | CelebA-HQ 128x128           | FID 39.75                 | 4
Point-cloud Gradient Flow  | ShapeNet Core-55            | W2 Error (Step 0) 2.05e+3 | 4
Point Cloud Reconstruction | ModelNet40 Epoch 20 (test)  | SW2 2.91                  | 4
Point Cloud Reconstruction | ModelNet40 Epoch 100 (test) | SW2 2.24                  | 4
Point Cloud Reconstruction | ModelNet40 Epoch 200 (test) | SW2 Error 2.14            | 4
