
Geometric Autoencoders -- What You See is What You Decode

About

Visualization is a crucial step in exploratory data analysis. One possible approach is to train an autoencoder with a low-dimensional latent space. Large network depth and width can help unfold the data. However, such expressive networks can achieve low reconstruction error even when the latent representation is distorted. To avoid such misleading visualizations, we propose first a differential-geometric perspective on the decoder, leading to insightful diagnostics for an embedding's distortion, and second a new regularizer mitigating such distortion. Our "Geometric Autoencoder" avoids stretching the embedding spuriously, so that the visualization captures the data structure more faithfully. It also flags areas in which little distortion could not be achieved, thus guarding against misinterpretation.
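The distortion diagnostic described above can be derived from the decoder's Jacobian: the pullback metric G = JᵀJ measures how the decoder locally stretches latent space, and its (generalized) determinant gives the local area scaling. Below is a minimal sketch of this idea in PyTorch, assuming a toy 2-D-latent decoder; the network, its sizes, and the variance-of-log-determinant penalty are illustrative assumptions, not the authors' exact implementation.

```python
import torch

# Hypothetical tiny decoder: 2-D latent -> 10-D data space (for illustration only).
torch.manual_seed(0)
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 10)
)

def pullback_metric(z):
    """Pullback metric G = J^T J of the decoder at latent point z."""
    J = torch.autograd.functional.jacobian(decoder, z)  # shape (10, 2)
    return J.T @ J                                      # shape (2, 2)

def log_area_scaling(z):
    """log sqrt(det G): how strongly the decoder stretches area around z."""
    return 0.5 * torch.logdet(pullback_metric(z))

# Diagnostic: latent regions with very different log-determinants are
# rendered at very different scales, i.e. the embedding is distorted there.
zs = torch.randn(8, 2)
logdets = torch.stack([log_area_scaling(z) for z in zs])

# Regularizer in the spirit of the paper: penalize the spread of the
# log-determinants so the decoder scales all latent regions uniformly.
geometric_reg = logdets.var()
```

Adding `geometric_reg` (weighted) to the reconstruction loss would discourage spurious stretching, while the per-point `logdets` can be plotted over the embedding to flag regions where distortion remains.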

Philipp Nazari, Sebastian Damrich, Fred A. Hamprecht • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Cell Type Classification | ssREAD (evaluation) | Accuracy | 95.75 | 20 |
| Manifold Representation Learning | Swiss Roll, dSprites, and MNIST (combined average across datasets) | k-NN Recall | 9 | 10 |
| Representation Learning | dSprites | k-NN Accuracy | 49.9 | 10 |
| Representation Learning | MNIST Uniform (test) | k-NN Accuracy | 70.3 | 10 |
| Representation Learning | MNIST non-uniform | k-NN Accuracy | 68.2 | 10 |
| Representation Learning | MNIST | -- | -- | 9 |
| Representation Learning | CIFAR10 | Reciprocity | 95 | 7 |
| Representation Learning | FMNIST | Reconstruction Error | 0.12 | 7 |
| Representation Learning | Paul15 | Rec | 94 | 7 |
| Representation Learning | PBMC3k | Rec Score | 82 | 7 |
