
Universal Latent Homeomorphic Manifolds: A Framework for Cross-Domain Representation Unification

About

We present the Universal Latent Homeomorphic Manifold (ULHM), a framework that unifies semantic representations (e.g., human descriptions, diagnostic labels) and observation-driven machine representations (e.g., pixel intensities, sensor readings) into a single latent structure. Despite originating from fundamentally different pathways, both modalities capture the same underlying reality. We establish \emph{homeomorphism}, a continuous bijection preserving topological structure, as the mathematical criterion for determining when latent manifolds induced by different semantic-observation pairs can be rigorously unified. This criterion provides theoretical guarantees for three critical applications: (1) semantic-guided sparse recovery from incomplete observations, (2) cross-domain transfer learning with verified structural compatibility, and (3) zero-shot compositional learning via valid transfer from semantic to observation space. Our framework learns continuous manifold-to-manifold transformations through conditional variational inference, avoiding brittle point-to-point mappings. We develop practical verification algorithms, including trust, continuity, and Wasserstein distance metrics, that empirically validate homeomorphic structure from finite samples. Experiments demonstrate: (1) sparse image recovery from 5% of CelebA pixels and MNIST digit reconstruction at multiple sparsity levels, (2) cross-domain classifier transfer achieving 86.73% accuracy from MNIST to Fashion-MNIST without retraining, and (3) zero-shot classification on unseen classes achieving 78.76% on CIFAR-10. Critically, the homeomorphism criterion determines when different semantic-observation pairs share compatible latent structure, enabling principled unification into universal representations and providing a mathematical foundation for decomposing general foundation models into domain-specific components.
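The abstract's empirical verification idea (trustworthiness, continuity, and Wasserstein-distance checks on finite samples) can be sketched with standard tools. This is only an illustrative sketch, not the paper's implementation: the function `manifold_compatibility`, the synthetic rotated data, and the use of scikit-learn's `trustworthiness` plus a 1-D Wasserstein distance over pairwise-distance distributions are all our assumptions.

```python
# Hedged sketch: empirical checks that two paired latent samples share
# compatible (homeomorphism-like) local and global structure.
# Names and metric choices are illustrative, not from the paper.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import wasserstein_distance
from sklearn.manifold import trustworthiness

def manifold_compatibility(Z_a, Z_b, n_neighbors=5):
    """Score how well the pairing Z_a[i] <-> Z_b[i] preserves topology.

    Z_a: (n, d_a) and Z_b: (n, d_b) latent samples; row i of each is
    the same underlying example embedded in the two spaces.
    """
    # Trustworthiness: neighbors in Z_b were already neighbors in Z_a.
    trust = trustworthiness(Z_a, Z_b, n_neighbors=n_neighbors)
    # Continuity: the symmetric check, with the roles of the spaces swapped.
    cont = trustworthiness(Z_b, Z_a, n_neighbors=n_neighbors)
    # 1-D Wasserstein distance between the two pairwise-distance
    # distributions: a cheap global shape comparison.
    w = wasserstein_distance(pdist(Z_a), pdist(Z_b))
    return {"trust": trust, "continuity": cont, "wasserstein": w}

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 8))
# A rotation is an isometry, hence a homeomorphism: all three
# scores should indicate near-perfect structural compatibility.
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
scores = manifold_compatibility(Z, Z @ Q)
```

A mapping that tears or folds the manifold (e.g. a random permutation of the rows of `Z_b`) would drive trustworthiness and continuity toward chance level, which is how such metrics flag incompatible latent structure.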

Tong Wu, Tayab Uddin Wara, Daniel Hernandez, Sidong Lei • 2026

Related benchmarks

Task                  Dataset                              Metric    Result   Rank
Image Classification  Fashion-MNIST (test)                 Accuracy  86.73    568
Sparse Recovery       CelebA (test)                        MSE       0.0247   40
Image Reconstruction  MNIST                                MSE       0.021    24
Classification        MNIST (test)                         Accuracy  96.97    14
Image Classification  MNIST (unseen classes 5-9)           Accuracy  89.47    9
Image Classification  Fashion-MNIST (unseen classes 5-9)   Accuracy  84.7     9
Image Classification  CIFAR-10 (unseen classes 5-9)        Accuracy  78.76    9
