
T-Modules: Translation Modules for Zero-Shot Cross-Modal Machine Translation

About

We present a new approach to perform zero-shot cross-modal transfer between speech and text for translation tasks. Multilingual speech and text are encoded in a joint fixed-size representation space. We then compare different approaches to decode these multimodal and multilingual fixed-size representations, enabling zero-shot translation between languages and modalities. All our models are trained without the need for cross-modal labeled translation data. Despite a fixed-size representation, we achieve very competitive results on several text and speech translation tasks. In particular, we significantly improve the state of the art for zero-shot speech translation on MuST-C. Incorporating a speech decoder in our framework, we present the first results for zero-shot direct speech-to-speech and text-to-speech translation.
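The core idea above is that a speech encoder and a text encoder map variable-length inputs into the same fixed-size embedding space, so a decoder trained on one modality can consume embeddings from the other. The toy sketch below illustrates only that property; the encoder (mean pooling plus normalisation), the dimensions, and all names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Toy sketch of a joint fixed-size representation space (illustrative only;
# dimensions, pooling, and data are NOT from the paper).
DIM = 16
rng = np.random.default_rng(0)

def encode(frames: np.ndarray) -> np.ndarray:
    """Hypothetical encoder: variable-length input -> one fixed-size vector."""
    pooled = frames.mean(axis=0)            # pool over tokens/frames
    return pooled / np.linalg.norm(pooled)  # unit-normalise the embedding

# Pretend a text sentence and its spoken rendition land near each other in
# the joint space -- the property the encoders are trained to satisfy.
text_tokens = rng.normal(size=(5, DIM))                      # 5 token vectors
speech_frames = (np.concatenate([text_tokens] * 4)           # 20 noisy frames
                 + 0.05 * rng.normal(size=(20, DIM)))

z_text = encode(text_tokens)
z_speech = encode(speech_frames)

# A decoder trained only on text embeddings can be fed the speech embedding
# unchanged -- that is the zero-shot cross-modal transfer.
similarity = float(z_text @ z_speech)
print(f"cosine(text, speech) = {similarity:.3f}")
```

In this sketch the two embeddings end up nearly identical, so any decoder operating on the fixed-size vector is agnostic to which modality produced it.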

Paul-Ambroise Duquenne, Hongyu Gong, Benoît Sagot, Holger Schwenk • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Speech Translation | MuST-C (tst-COMMON) | BLEU (De) | 23.8 | 20
