
Bi-Lipschitz Autoencoder With Injectivity Guarantee

About

Autoencoders are widely used for dimensionality reduction, based on the assumption that high-dimensional data lies on low-dimensional manifolds. Regularized autoencoders aim to preserve manifold geometry during dimensionality reduction, but existing approaches often suffer from non-injective mappings and overly rigid constraints that limit their effectiveness and robustness. In this work, we identify encoder non-injectivity as a core bottleneck that leads to poor convergence and distorted latent representations. To ensure robustness across data distributions, we formalize the concept of admissible regularization and provide sufficient conditions for its satisfaction. We then propose the Bi-Lipschitz Autoencoder (BLAE), which introduces two key innovations: (1) an injective regularization scheme based on a separation criterion that eliminates pathological local minima, and (2) a bi-Lipschitz relaxation that preserves geometry and is robust to data distribution drift. Empirical results on diverse datasets show that BLAE consistently outperforms existing methods in preserving manifold structure while remaining resilient to sampling sparsity and distribution shifts. Code is available at https://github.com/qipengz/BLAE.

Qipeng Zhan, Zhuoping Zhou, Zexuan Wang, Qi Long, Li Shen• 2026
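The abstract's bi-Lipschitz relaxation constrains how much the encoder may stretch or contract pairwise distances, and its lower bound rules out collapsing distinct points (the injectivity failure the paper targets). The sketch below is a hypothetical illustration of that idea, not the paper's regularizer: it penalizes pairwise distance ratios between input and latent space that fall outside a band [1/L, L], where the function name `bilipschitz_penalty` and the hinge-squared form are our assumptions.

```python
import numpy as np

def bilipschitz_penalty(x, z, L=2.0, eps=1e-8):
    """Hypothetical sketch of a bi-Lipschitz regularizer (not BLAE's exact loss).

    Penalizes ratios ||z_i - z_j|| / ||x_i - x_j|| outside [1/L, L].
    x: (n, d_in) inputs; z: (n, d_latent) encoder outputs for the same batch.
    """
    n = x.shape[0]
    iu = np.triu_indices(n, k=1)  # unique pairs i < j
    dx = np.linalg.norm(x[:, None] - x[None, :], axis=-1)[iu]
    dz = np.linalg.norm(z[:, None] - z[None, :], axis=-1)[iu]
    ratio = dz / (dx + eps)
    # hinge-squared penalty: zero inside the band [1/L, L], grows outside it
    over = np.maximum(ratio - L, 0.0)
    under = np.maximum(1.0 / L - ratio, 0.0)
    return float(np.mean(over**2 + under**2))

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 4))
print(bilipschitz_penalty(x, x.copy(), L=2.0))        # isometry: 0.0
print(bilipschitz_penalty(x, np.zeros((16, 4)), L=2.0) > 0)  # collapse is penalized: True
```

Note how a constant (non-injective) encoder output drives every ratio to zero and is penalized by the lower bound, while any map whose distortion stays within the band incurs no cost, which is the "relaxation" relative to a strict isometry constraint.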

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Cell Type Classification | ssREAD (evaluation) | Accuracy: 96.26 | 20 |
| Representation Learning | MNIST Uniform (test) | k-NN Accuracy: 90.3 | 10 |
| Representation Learning | dSprites | k-NN Accuracy: 73.9 | 10 |
| Representation Learning | MNIST non-uniform | k-NN Accuracy: 86.5 | 10 |
| Manifold Representation Learning | Swiss Roll, dSprites, and MNIST (average across datasets) | k-NN Recall: 1.8 | 10 |
