Gradient-Informed Training for Low-Resource Multilingual Speech Translation
About
In low-resource multilingual speech-to-text translation, uniform architectural sharing across languages frequently introduces representation conflicts that impede convergence. This work proposes a principled methodology for automatically determining layer-specific sharing patterns by mining training gradient information. Our approach employs three distinct analysis strategies: distance-based language clustering, self- and cross-task divergence metrics for capacity allocation, and joint factorization coupled with canonical correlation analysis for subspace alignment. Extensive evaluation across four language pairs (using the SeamlessM4T-Medium architecture) demonstrates consistent improvements in translation quality.
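The first strategy, distance-based language clustering, can be illustrated with a minimal sketch. The function names, the cosine-distance choice, and the single-linkage merge rule below are illustrative assumptions, not the paper's exact procedure: per-language gradient vectors for one layer are compared pairwise, and languages whose gradients point in similar directions are grouped to share that layer's parameters.

```python
import numpy as np

def gradient_distance_clustering(grads, threshold=0.5):
    """Cluster languages by pairwise cosine distance between their
    per-language gradient vectors for a single layer.

    grads: dict mapping language code -> flattened gradient vector.
    Returns a list of sets; languages in the same set would share
    the layer's parameters. (Illustrative sketch, not the paper's
    exact algorithm.)
    """
    def dist(a, b):
        # cosine distance: 1 - cos(g_i, g_j); near 0 = aligned gradients
        return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # simple single-linkage agglomeration under the distance threshold
    clusters = [{lang} for lang in grads]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(dist(grads[a], grads[b]) < threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters
```

For example, two languages whose layer gradients are nearly parallel end up in one cluster (shared layer), while a language whose gradients oppose them is assigned its own language-specific copy.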
Ruiyan Sun, Satoshi Nakamura · 2026
Related benchmarks
| Task | Dataset | Result (BLEU) | Rank |
|---|---|---|---|
| Speech-to-text Translation | IWSLT Aeb-en 2025 (eval) | 8.39 | 3 |
| Speech-to-text Translation | IWSLT Gle-en 2025 (eval) | 40.2 | 3 |
| Speech-to-text Translation | IWSLT Bem-en 2025 (eval) | 20.29 | 3 |
| Speech-to-text Translation | IWSLT Est-en 2025 (eval) | 20.3 | 1 |