
Gradient-Informed Training for Low-Resource Multilingual Speech Translation

About

In low-resource multilingual speech-to-text translation, uniform architectural sharing across languages frequently introduces representation conflicts that impede convergence. This work proposes a principled methodology to automatically determine layer-specific sharing patterns by mining training gradient information. Our approach employs three distinct analysis strategies: distance-based language clustering, self/cross-task divergence metrics for capacity allocation, and joint factorization coupled with canonical correlation analysis for subspace alignment. Extensive evaluation across four language pairs (using the SeamlessM4T-Medium architecture) demonstrates consistent improvements in translation quality.
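To illustrate the first strategy, here is a minimal sketch of distance-based language clustering on gradients. It is not the authors' implementation: the gradient vectors, language codes, and the choice of cosine distance with average-linkage clustering are all illustrative assumptions. The idea is that languages whose per-layer gradients point in similar directions can share that layer's parameters.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical per-language gradient vectors for a single layer,
# e.g. flattened layer gradients averaged over a training batch.
rng = np.random.default_rng(0)
langs = ["aeb", "gle", "bem", "est"]
grads = {lang: rng.normal(size=512) for lang in langs}

# Pairwise cosine distances between the language gradient vectors.
G = np.stack([grads[lang] for lang in langs])
dists = pdist(G, metric="cosine")

# Agglomerative clustering over the distance matrix; languages that
# fall in the same cluster would share this layer's parameters.
Z = linkage(dists, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
sharing = {lang: int(c) for lang, c in zip(langs, labels)}
print(sharing)
```

In practice this would be repeated per layer, so that some layers end up shared across all languages while others are split among gradient-compatible subsets.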

Ruiyan Sun, Satoshi Nakamura • 2026

Related benchmarks

Task                         Dataset                     Result        Rank
Speech-to-text Translation   IWSLT Aeb-en 2025 (eval)    BLEU 8.39     3
Speech-to-text Translation   IWSLT Gle-en 2025 (eval)    BLEU 40.2     3
Speech-to-text Translation   IWSLT Bem-en 2025 (eval)    BLEU 20.29    3
Speech-to-text Translation   IWSLT Est-en 2025 (eval)    BLEU 20.3     1
