
Modality Adaption or Regularization? A Case Study on End-to-End Speech Translation

About

Pre-training and fine-tuning is a paradigm for alleviating the data scarcity problem in end-to-end speech translation (E2E ST). The commonplace "modality gap" between speech and text data often leads to inconsistent inputs between pre-training and fine-tuning. However, we observe that this gap occurs only in the early stages of fine-tuning and does not have a major impact on final performance. On the other hand, we find that there is another gap, which we call the "capacity gap": high-resource tasks (such as ASR and MT) require a large model to fit, but when that model is reused for a low-resource task (E2E ST), it yields sub-optimal performance due to over-fitting. In a case study, we find that regularization plays a more important role than a well-designed modality adaption method, achieving 29.0 BLEU for en-de and 40.3 for en-fr on the MuST-C dataset. Code and models are available at https://github.com/hannlp/TAB.
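The abstract does not spell out which regularizers the case study uses, but label smoothing is a standard choice when fine-tuning a large pre-trained model on a low-resource task such as E2E ST. The sketch below is purely illustrative (the function name, toy vocabulary, and smoothing weight are assumptions, not the paper's recipe); it shows how smoothing mixes the gold-token loss with a uniform-prior term, discouraging the over-confident fit the abstract attributes to the capacity gap.

```python
import math

def label_smoothed_nll(log_probs, target, eps=0.1):
    """Label-smoothed negative log-likelihood for one output position.

    log_probs: log-probabilities over the vocabulary
    target:    index of the gold token
    eps:       probability mass spread uniformly over the vocabulary
    """
    vocab = len(log_probs)
    nll = -log_probs[target]            # standard cross-entropy term
    uniform = -sum(log_probs) / vocab   # uniform-prior (smoothing) term
    return (1.0 - eps) * nll + eps * uniform

# Toy distribution over a 4-word vocabulary; the gold token is index 0.
probs = [0.7, 0.1, 0.1, 0.1]
log_probs = [math.log(p) for p in probs]

plain = label_smoothed_nll(log_probs, target=0, eps=0.0)    # == -log 0.7
smoothed = label_smoothed_nll(log_probs, target=0, eps=0.1) # strictly larger
```

With `eps=0`, the loss reduces to ordinary cross-entropy; with `eps>0`, a confident prediction is penalized slightly, which acts as a regularizer during fine-tuning.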

Yuchen Han, Chen Xu, Tong Xiao, Jingbo Zhu • 2023

Related benchmarks

Task                 Dataset                Result     Rank
Speech Translation   MuST-C (tst-COMMON)    --         20
Speech Translation   MuST-C COMMON (test)   BLEU 29    12

Other info

Code
