
Sample, Translate, Recombine: Leveraging Audio Alignments for Data Augmentation in End-to-end Speech Translation

About

End-to-end speech translation relies on data that pair source-language speech inputs with corresponding translations into a target language. Such data are notoriously scarce, making synthetic data augmentation by back-translation or knowledge distillation a necessary ingredient of end-to-end training. In this paper, we present a novel approach to data augmentation that leverages audio alignments, linguistic properties, and translation. First, we augment a transcription by sampling from a suffix memory that stores text and audio data. Second, we translate the augmented transcript. Finally, we recombine concatenated audio segments with the generated translation. Apart from training an MT system, we use only basic off-the-shelf components without fine-tuning. While having resource demands similar to knowledge distillation, adding our method delivers consistent improvements of up to 0.9 BLEU points on five language pairs on CoVoST 2 and up to 1.1 BLEU points on two language pairs on Europarl-ST.
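The three steps above (sample, translate, recombine) can be illustrated with a minimal sketch. All names here are hypothetical: the `suffix_memory` structure, the `translate` stand-in, and the `augment` helper are illustrations of the idea, not the paper's implementation, and real audio splicing would operate on waveforms or features rather than plain lists.

```python
import random

# Hypothetical suffix memory: maps a word to continuation segments,
# each a (text suffix, aligned audio suffix) pair. Audio is mocked as
# a list of frame values so that list concatenation mimics splicing.
suffix_memory = {
    "the": [("cat sat", [0.3, 0.4]), ("dog ran", [0.5, 0.2])],
}

def translate(text):
    # Stand-in for the MT system; a real setup would call a trained
    # translation model here. Upper-casing just marks the "translation".
    return text.upper()

def augment(prefix_text, prefix_audio, memory, rng):
    """Sample a suffix continuing the prefix's last word, concatenate
    text and audio, then translate the augmented transcript."""
    last_word = prefix_text.split()[-1]
    candidates = memory.get(last_word)
    if not candidates:
        return None  # no stored continuation for this word
    text_sfx, audio_sfx = rng.choice(candidates)          # sample
    aug_text = prefix_text + " " + text_sfx               # recombine text
    aug_audio = prefix_audio + audio_sfx                  # splice audio
    return aug_audio, translate(aug_text)                 # translate

rng = random.Random(0)
pair = augment("see the", [0.1, 0.2], suffix_memory, rng)
```

The returned pair couples the spliced audio with a translation of the augmented transcript, which is the new synthetic training example.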

Tsz Kin Lam, Shigehiko Schamoni, Stefan Riezler · 2022

Related benchmarks

Task               | Dataset                  | Metric | Result | Rank
Speech Translation | CoVoST 2 (test)          | --     | --     | 46
Speech Translation | Europarl-ST v1 (test)    | BLEU   | 29.28  | 8
Speech Translation | CoVoST 2 En-De (test)    | chrF2  | 45.13  | 4
Speech Translation | CoVoST 2 En-Ca (test)    | chrF2  | 49.1   | 4
Speech Translation | CoVoST 2 En-Tr (test)    | chrF2  | 39.7   | 4
Speech Translation | CoVoST 2 En-Cy (test)    | chrF2  | 51.5   | 4
Speech Translation | CoVoST 2 En-Sl (test)    | chrF2  | 0.426  | 4
Speech Translation | Europarl-ST En-De (test) | chrF2  | 52.37  | 4
Speech Translation | Europarl-ST En-Fr (test) | chrF2  | 55.37  | 4
