
Multi-Way Representation Alignment

About

The Platonic Representation Hypothesis suggests that independently trained neural networks converge to increasingly similar latent spaces. However, current strategies for mapping these representations are inherently pairwise, scaling quadratically with the number of models and failing to yield a consistent global reference. In this paper, we study the alignment of $M \ge 3$ models. We first adapt Generalized Procrustes Analysis (GPA) to construct a shared orthogonal universe that preserves the internal geometry essential for tasks like model stitching. We then show that strict isometric alignment is suboptimal for retrieval, where agreement-maximizing methods like Canonical Correlation Analysis (CCA) typically prevail. To bridge this gap, we finally propose Geometry-Corrected Procrustes Alignment (GCPA), which establishes a robust GPA-based universe followed by a post-hoc correction for directional mismatch. Extensive experiments demonstrate that GCPA consistently improves any-to-any retrieval while retaining a practical shared reference space.
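The shared-universe construction described above can be illustrated with a minimal sketch of classical Generalized Procrustes Analysis. This is not the paper's implementation; the function name `gpa` and all parameters are illustrative, and the correction step of GCPA is omitted. Each model's representations are rotated by an orthogonal map toward a common reference, which is iteratively refined as the mean of the aligned spaces.

```python
import numpy as np

def gpa(reps, n_iter=10):
    """Illustrative Generalized Procrustes Analysis.

    reps: list of M arrays, each (n_samples, d), one per model.
    Returns per-model orthogonal rotations and the shared reference.
    """
    # Initialize the shared reference with the first model's space.
    ref = reps[0].copy()
    rotations = [np.eye(reps[0].shape[1]) for _ in reps]
    for _ in range(n_iter):
        aligned = []
        for i, X in enumerate(reps):
            # Orthogonal Procrustes step: the rotation minimizing
            # ||X R - ref||_F is R = U V^T from the SVD of X^T ref.
            U, _, Vt = np.linalg.svd(X.T @ ref)
            rotations[i] = U @ Vt
            aligned.append(X @ rotations[i])
        # Refine the shared universe as the mean of aligned spaces.
        ref = np.mean(aligned, axis=0)
    return rotations, ref
```

Because each map is orthogonal, the internal geometry of every model's space (pairwise distances and angles) is preserved in the shared universe, which is the property the abstract highlights for model stitching.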

Akshit Achara, Tatiana Gaintseva, Mateo Mahaut, Pritish Chakraborty, Viktor Stenby Johansson, Melih Barsbey, Emanuele Rodolà, Donato Crisostomi • 2026

Related benchmarks

Task                     Dataset                  Metric                  Result   Rank
Multimodal alignment     Flickr8K                 Delta+ Mean Distance    0.503    12
Cross-lingual retrieval  TED-MULTI M=3 (test)     Avg Rank-1 Retrieval    63.7     4
Cross-lingual retrieval  TED-MULTI M=5 (test)     Avg Rank-1 Retrieval    55.3     4
Cross-lingual retrieval  TED-MULTI M=10 (test)    Avg Rank-1 Retrieval    50.3     4
Intent Clustering        MASSIVE (test)           ARI                     0.3      4
