
Cross-lingual Matryoshka Representation Learning across Speech and Text

About

Speakers of under-represented languages face both a language barrier, as most online knowledge is available in only a few dominant languages, and a modality barrier, since information is largely text-based while many of these languages are primarily oral. We address this for French-Wolof by training the first bilingual speech-text Matryoshka embedding model, enabling efficient retrieval of French text from Wolof speech queries without relying on a costly ASR-translation pipeline. We introduce large-scale data curation pipelines and new benchmarks, compare modeling strategies, and show that modality fusion within a frozen text Matryoshka model performs best. Although trained only for retrieval, the model generalizes well to other tasks, such as speech intent detection, indicating that it learns general semantic representations. Finally, we analyze cost-accuracy trade-offs across Matryoshka dimensions and ranks, showing that information is concentrated in only a few components, which suggests room for further efficiency improvements.
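Matryoshka embeddings are trained so that any prefix of the full vector is itself a usable embedding, which is what makes the cost-accuracy trade-off across dimensions possible: retrieval can run on the first 64 or 128 components instead of the full vector. A minimal sketch of retrieval over truncated embeddings, assuming precomputed vectors (all names, dimensions, and values below are illustrative, not the paper's actual model or API):

```python
import math

def truncate_and_normalize(vec, dim):
    """Keep the first `dim` components of a Matryoshka embedding and
    re-normalize, so cosine similarity remains meaningful."""
    v = vec[:dim]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def retrieve(query_vec, doc_vecs, dim, top_k=5):
    """Rank documents by cosine similarity at a reduced dimension.

    query_vec: full-dimension query embedding (e.g. from Wolof speech)
    doc_vecs:  dict mapping doc id -> full-dimension document embedding
    dim:       Matryoshka prefix length to use for scoring
    """
    q = truncate_and_normalize(query_vec, dim)
    scored = []
    for doc_id, d in doc_vecs.items():
        dv = truncate_and_normalize(d, dim)
        score = sum(a * b for a, b in zip(q, dv))
        scored.append((score, doc_id))
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:top_k]]

# Toy example: at dim=2 the query is closest to document "a".
docs = {"a": [1.0, 0.0, 0.4, 0.1], "b": [0.0, 1.0, 0.0, 0.0]}
print(retrieve([1.0, 0.0, 0.5, 0.2], docs, dim=2, top_k=1))  # -> ['a']
```

Lower `dim` means cheaper index storage and faster scoring; the paper's finding that information concentrates in a few components is what makes such truncation viable.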

Yaya Sy, Dioula Doucouré, Christophe Cerisara, Irina Illina • 2026

Related benchmarks

| Task               | Dataset                      | Metric | Result | Rank |
|--------------------|------------------------------|--------|--------|------|
| Document Retrieval | Kallaama Retrieval-Eval      | nDCG@5 | 69.85  | 17   |
| Document Retrieval | Fleurs Retrieval-Eval (test) | nDCG@5 | 57.89  | 13   |
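nDCG@5, the metric reported above, compares the discounted gain of the system's top-5 ranking against the gain of an ideal ordering of the same relevance judgments. A minimal self-contained sketch of the standard formulation (not the evaluation code used for these benchmarks):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain: each relevance is discounted by
    log2(rank + 1), so hits near the top count more."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k=5):
    """nDCG@k: DCG of the system's ranking divided by the DCG of the
    ideal (descending) ordering of the same relevance labels."""
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    if ideal_dcg == 0:
        return 0.0
    return dcg_at_k(ranked_relevances, k) / ideal_dcg

# A relevant document at rank 1 scores 1.0; at rank 2 it is discounted.
print(ndcg_at_k([1, 0, 0, 0, 0]))  # -> 1.0
print(ndcg_at_k([0, 1, 0, 0, 0]))  # ~0.63
```

The benchmark scores above are this quantity averaged over queries and reported on a 0-100 scale.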
