Source and Target Bidirectional Knowledge Distillation for End-to-end Speech Translation

About

A conventional approach to improving the performance of end-to-end speech translation (E2E-ST) models is to leverage the source transcription via pre-training and joint training with automatic speech recognition (ASR) and neural machine translation (NMT) tasks. However, because the input modalities differ, it is difficult to leverage source-language text effectively. In this work, we focus on sequence-level knowledge distillation (SeqKD) from external text-based NMT models. To leverage the full potential of the source-language information, we propose backward SeqKD: SeqKD from a target-to-source backward NMT model. To this end, we train a bilingual E2E-ST model to predict paraphrased transcriptions as an auxiliary task with a single decoder. The paraphrases are generated from the translations in bitext via back-translation. We further propose bidirectional SeqKD, which combines SeqKD from both forward and backward NMT models. Experimental evaluations on both autoregressive and non-autoregressive models show that SeqKD in each direction consistently improves translation performance, and that the two directions are complementary regardless of model capacity.
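The data construction behind bidirectional SeqKD can be summarized in a short sketch. The following is a minimal illustration under stated assumptions, not the authors' code: the names forward_nmt, backward_nmt, the .translate() method, and the <lang:...> tag convention are all hypothetical stand-ins for the trained NMT teachers and the language tags that drive the single bilingual decoder.

```python
# Minimal sketch of bidirectional SeqKD data construction.
# All names here (forward_nmt, backward_nmt, .translate(), the
# <lang:...> tags) are hypothetical stand-ins, not the authors' code.

from typing import List, Tuple


def forward_seqkd(transcripts: List[str], forward_nmt) -> List[str]:
    """Forward SeqKD: distill target-side text by translating source
    transcripts with a source-to-target NMT teacher."""
    return [forward_nmt.translate(src) for src in transcripts]


def backward_seqkd(translations: List[str], backward_nmt) -> List[str]:
    """Backward SeqKD: paraphrase source transcripts by back-translating
    the reference translations with a target-to-source NMT teacher."""
    return [backward_nmt.translate(tgt) for tgt in translations]


def build_bidirectional_targets(
    transcripts: List[str],
    translations: List[str],
    forward_nmt,
    backward_nmt,
) -> List[Tuple[str, str]]:
    """Pair each utterance with two decoder targets for a single
    bilingual decoder: a distilled translation (forward SeqKD) and a
    paraphrased transcription (backward SeqKD). A language tag tells
    the shared decoder which sequence to generate."""
    distilled = forward_seqkd(transcripts, forward_nmt)
    paraphrased = backward_seqkd(translations, backward_nmt)
    return [
        ("<lang:tgt> " + d, "<lang:src> " + p)
        for d, p in zip(distilled, paraphrased)
    ]
```

In one plausible setup, each speech utterance would then contribute two training examples to the shared decoder, one per direction, while inference uses only the translation-side tag.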

Hirofumi Inaguma, Tatsuya Kawahara, Shinji Watanabe • 2021

Related benchmarks

Task                 Dataset                    Result        Rank
Speech Translation   CoVoST 2 (test)            -             46
Speech Translation   Europarl-ST v1 (test)      BLEU 28.79    8
Speech Translation   Europarl-ST En-De (test)   chrF2 51.43   4
Speech Translation   Europarl-ST En-Fr (test)   chrF2 54.97   4
Speech Translation   CoVoST 2 En-De (test)      chrF2 44.13   4
Speech Translation   CoVoST 2 En-Ca (test)      chrF2 48.17   4
Speech Translation   CoVoST 2 En-Tr (test)      chrF2 38.53   4
Speech Translation   CoVoST 2 En-Cy (test)      chrF2 50.67   4
Speech Translation   CoVoST 2 En-Sl (test)      chrF2 41.73   4
