
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy

About

We propose a method to optimize the representation and distinguishability of samples from two probability distributions, by maximizing the estimated power of a statistical test based on the maximum mean discrepancy (MMD). This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GANs), in which a model attempts to generate realistic samples and a discriminator attempts to tell these apart from data samples. In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples or on features of the samples; second, to evaluate the performance of a generative model, by testing the model's samples against a reference data set. In the latter role, the optimized MMD is particularly helpful, as it gives an interpretable indication of how the model and data distributions differ, even in cases where individual model samples are not easily distinguished either by eye or by classifier.
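As background for the test statistic named above, the following is a minimal sketch of the standard unbiased estimator of the squared MMD with a Gaussian kernel (this is the generic estimator, not the paper's optimized variant; the bandwidth `sigma` and all function names are illustrative choices):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel values k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    # Unbiased estimator of MMD^2: the diagonal terms of the
    # within-sample kernel matrices are excluded.
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (200, 2))  # samples from P
Y = rng.normal(0.0, 1.0, (200, 2))  # more samples from P
Z = rng.normal(3.0, 1.0, (200, 2))  # samples from a shifted Q
print(mmd2_unbiased(X, Y))  # near zero: same distribution
print(mmd2_unbiased(X, Z))  # clearly positive: distributions differ
```

The paper's contribution is to choose the kernel (and hence the representation on which this statistic is computed) by maximizing the estimated power of the resulting two-sample test, rather than fixing `sigma` by a heuristic.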

Danica J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, Arthur Gretton · 2016

Related benchmarks

Task                               | Dataset                          | Result                     | Rank
-----------------------------------|----------------------------------|----------------------------|-----
Two-sample test                    | Higgs alpha=0.05 (test)          | Test Power: 100            | 42
Two-sample test                    | MNIST Real vs DCGAN samples (test) | Test Power: 89.4         | 36
Domain Adaptation                  | MNIST to MNIST-M (test)          | --                         | 24
Domain Adaptation Classification   | SVHN → MNIST (test)              | Error Rate: 0.2848         | 12
Domain Adaptation Classification   | Synthetic Signs → GTSRB (test)   | Error Rate (%): 10.69      | 10
Domain Adaptation Classification   | Synthetic Digits → SVHN (test)   | Error Rate (%): 19.14      | 10
Two-sample test                    | CIFAR-10 vs CIFAR-10.1 1.0 (test) | Mean Rejection Rate: 0.316 | 6
