
Latent Space Translation via Semantic Alignment

About

While different neural models often exhibit latent spaces that are alike when exposed to semantically related data, this intrinsic similarity is not always immediately discernible. Towards a better understanding of this phenomenon, our work shows how representations learned from these neural modules can be translated between different pre-trained networks via simpler transformations than previously thought. An advantage of this approach is the ability to estimate these transformations using standard, well-understood algebraic procedures that have closed-form solutions. Our method directly estimates a transformation between two given latent spaces, thereby enabling effective stitching of encoders and decoders without additional training. We extensively validate the adaptability of this translation procedure in different experimental settings: across various trainings, domains, architectures (e.g., ResNet, CNN, ViT), and in multiple downstream tasks (classification, reconstruction). Notably, we show how it is possible to zero-shot stitch text encoders and vision decoders, or vice-versa, yielding surprisingly good classification performance in this multimodal setting.
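The "standard, well-understood algebraic procedures with closed-form solutions" referenced above can be illustrated with orthogonal Procrustes: given paired latent vectors from two pre-trained encoders, the orthogonal map minimizing the Frobenius distance between the aligned spaces has a closed-form SVD solution. The sketch below is a generic illustration of that idea, not the paper's exact pipeline; it assumes the two spaces have equal dimensionality and that paired anchor samples are available, and the function name `estimate_translation` is our own.

```python
import numpy as np

def estimate_translation(X, Y):
    """Closed-form orthogonal map R minimizing ||X R - Y||_F (orthogonal Procrustes).

    X, Y: (n, d) arrays of latent vectors for the same n samples, produced by
    two different pre-trained encoders. Equal dimensionality is an assumption
    of this sketch.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt  # R is orthogonal: R.T @ R = I

# Toy check: if Y is an exact rotation of X, the estimate recovers it.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # random orthogonal "ground truth"
Y = X @ Q
R = estimate_translation(X, Y)
print(np.allclose(X @ R, Y))  # → True
```

Because the solution is a single SVD rather than gradient-based training, the transformation can be estimated cheaply from a small set of paired samples and then used to stitch an encoder from one model to a decoder from another.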

Valentino Maiorca, Luca Moschella, Antonio Norelli, Marco Fumero, Francesco Locatello, Emanuele Rodolà • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | MNIST | Accuracy | 94 | 395 |
| Image Classification | Fashion MNIST | Accuracy | 86 | 225 |
| Text Classification | TREC | Accuracy | 82 | 179 |
| Image Classification | CIFAR10 | Accuracy | 93 | 125 |
| Text Classification | AGNews | Accuracy | 67 | 119 |
| Text Classification | DBpedia (DBP) | Accuracy | 66 | 110 |
| Word Embedding Retrieval | Word Embeddings (test) | MRR | 0.97 | 80 |
| Sentiment Classification | IMDB | Accuracy | 59 | 41 |
| Retrieval | Caltech-UCSD Birds-200 (CUB) 2011 (test) | MRR | 0.33 | 21 |
| 3D-Text Matching | Objaverse-LVIS 1.0 (test) | CLIP Matching Accuracy | 18.4 | 15 |

Showing 10 of 17 rows

Other info

Code
