Bridging the Gap between Pre-Training and Fine-Tuning for End-to-End Speech Translation

About

End-to-end speech translation, a hot topic in recent years, aims to translate a segment of audio into a specific language with a single end-to-end model. Conventional approaches employ multi-task learning and pre-training for this task, but they suffer from the huge gap between pre-training and fine-tuning. To address this issue, we propose a Tandem Connectionist Encoding Network (TCEN), which bridges the gap by reusing all subnets in fine-tuning, keeping the roles of the subnets consistent, and pre-training the attention module. Furthermore, we propose two simple but effective methods to guarantee that the speech encoder outputs and the MT encoder inputs are consistent in terms of semantic representation and sequence length. Experimental results show that our model outperforms baselines by 2.2 BLEU on a large benchmark dataset.

Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, Ming Zhou • 2019
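To make the tandem idea concrete, below is a minimal PyTorch sketch of one plausible reading of the architecture: a CTC-trained speech encoder whose blank-predicted frames are dropped before its outputs are fed to the reused MT encoder, so that sequence lengths approach those of text input. All module names, layer sizes, and the blank-dropping rule here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TCENSketch(nn.Module):
    """Tandem encoding sketch: speech encoder -> shrink -> reused MT encoder."""

    def __init__(self, n_mels=80, d_model=256, vocab_size=10000, blank_id=0):
        super().__init__()
        self.blank_id = blank_id
        # Speech encoder, pre-trained on ASR data with a CTC objective.
        self.speech_encoder = nn.LSTM(n_mels, d_model, num_layers=3, batch_first=True)
        self.ctc_head = nn.Linear(d_model, vocab_size)  # per-frame CTC logits
        # MT encoder, pre-trained on text translation data and reused here in
        # the same role, so no subnet is discarded at fine-tuning time.
        self.mt_encoder = nn.LSTM(d_model, d_model, num_layers=2, batch_first=True)

    def shrink(self, frames, ctc_logits):
        # Length adaptation: drop frames whose CTC argmax is blank, so the
        # speech encoder's output length is closer to a text sequence length.
        keep = ctc_logits.argmax(dim=-1) != self.blank_id  # (1, T) bool mask
        return frames[keep].unsqueeze(0)  # batch size 1 kept for simplicity

    def forward(self, features):                # features: (1, T, n_mels)
        h, _ = self.speech_encoder(features)    # (1, T, d_model)
        ctc_logits = self.ctc_head(h)           # (1, T, vocab_size)
        h = self.shrink(h, ctc_logits)          # (1, T' <= T, d_model)
        enc_out, _ = self.mt_encoder(h)         # would feed an attention decoder
        return enc_out, ctc_logits

model = TCENSketch()
feats = torch.randn(1, 200, 80)                 # 200 acoustic frames
enc_out, ctc_logits = model(feats)
```

The design point of the sketch is that the MT encoder consumes the shrunken speech representation in exactly the role it was pre-trained for, rather than being thrown away or repurposed, which is the gap-bridging behavior the abstract describes.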

Related benchmarks

Task | Dataset | Metric | Result | Rank
Speech Translation | libri-trans (test) | Detokenized BLEU (case-sensitive) | 17.1 | 14
