
Hyperspherical Variational Auto-Encoders

About

The Variational Auto-Encoder (VAE) is one of the most widely used unsupervised machine learning models. Although the default choice of a Gaussian distribution for both the prior and posterior is mathematically convenient and often leads to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure. To address this issue we propose using a von Mises-Fisher (vMF) distribution instead, leading to a hyperspherical latent space. Through a series of experiments we show how such a hyperspherical VAE, or $\mathcal{S}$-VAE, is more suitable for capturing data with a hyperspherical latent structure, while outperforming a normal VAE ($\mathcal{N}$-VAE) in low dimensions on other data types. Code is available at http://github.com/nicola-decao/s-vae-tf and https://github.com/nicola-decao/s-vae-pytorch.
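As a concrete illustration of the distribution the abstract refers to, the following is a minimal sketch (assuming NumPy and SciPy; `vmf_log_density` is a hypothetical helper name, not taken from the authors' repositories) that evaluates the von Mises-Fisher log-density on the unit hypersphere $S^{p-1}$:

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function


def vmf_log_density(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on S^{p-1}.

    x, mu : unit-norm vectors of dimension p (mean direction mu).
    kappa : concentration parameter, kappa > 0.
    """
    p = mu.shape[-1]
    # Normalizing constant C_p(kappa) = kappa^{p/2-1} / ((2 pi)^{p/2} I_{p/2-1}(kappa)).
    # Use ive to avoid overflow: log I_v(kappa) = log ive(v, kappa) + kappa.
    log_c = ((p / 2 - 1) * np.log(kappa)
             - (p / 2) * np.log(2 * np.pi)
             - (np.log(ive(p / 2 - 1, kappa)) + kappa))
    return log_c + kappa * np.dot(mu, x)


# Sanity check for p = 2, where the vMF reduces to the von Mises
# distribution on the circle: the density should integrate to 1.
thetas = np.linspace(0.0, 2 * np.pi, 10001)[:-1]
points = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
mu = np.array([1.0, 0.0])
density = np.array([np.exp(vmf_log_density(x, mu, 5.0)) for x in points])
print(density.mean() * 2 * np.pi)  # Riemann sum over the circle, close to 1
```

Note that the normalizing constant involves the modified Bessel function $I_{p/2-1}(\kappa)$; this is why vMF-based VAEs require special treatment of the KL term and of gradients with respect to $\kappa$, compared with the closed-form Gaussian case.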

Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, Jakub M. Tomczak · 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Link Prediction | Citeseer | AUC | 94.7 | 146 |
| Link Prediction | Pubmed | AUC | 96 | 123 |
| Link Prediction | Cora | AUC | 0.941 | 116 |
| Link Prediction | Cora (test) | AUC | 0.941 | 69 |
| Link Prediction | PubMed (test) | AUC | 96 | 65 |
| Link Prediction | Citeseer (test) | AUC | 0.947 | 31 |
| Semi-supervised classification | MNIST 100 labels | Error Rate | 0.052 | 16 |
