MiLorE-SSL: Scaling Multilingual Capabilities in Self-Supervised Models without Forgetting

About

Self-supervised learning (SSL) has greatly advanced speech representation learning, but multilingual SSL models remain constrained to the languages encountered during pretraining. Retraining from scratch to incorporate new languages is computationally expensive, while sequential training without mitigation strategies often leads to catastrophic forgetting. To address this, we propose MiLorE-SSL, a lightweight framework that combines LoRA modules with a soft mixture-of-experts (MoE) mechanism for efficient continual multilingual training. LoRA provides efficient low-rank adaptation, while soft MoE promotes flexible expert sharing across languages, reducing cross-lingual interference. To further mitigate forgetting, we introduce a limited amount of replay data from existing languages, avoiding reliance on large historical corpora. Experiments on ML-SUPERB demonstrate that MiLorE-SSL achieves strong performance on new languages and improves performance on existing ones with only 2.14% trainable parameters.

Jing Xu, Minglin Wu, Xueyuan Chen, Xixin Wu, Helen Meng • 2026
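
As a rough illustration of the adapter design described in the abstract, the sketch below implements a linear layer whose frozen pretrained weight is augmented by a soft mixture of LoRA experts: a dense router assigns every token a weight over all experts' low-rank updates, rather than hard-routing to a single expert. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation; the class name SoftMoELoRALinear and all hyperparameters (rank, number of experts, scaling) are illustrative.

# Minimal sketch (not the authors' code): a frozen linear layer plus a
# soft mixture of LoRA experts. All names and hyperparameters are assumptions.
import torch
import torch.nn as nn

class SoftMoELoRALinear(nn.Module):
    def __init__(self, in_dim, out_dim, rank=8, num_experts=4, alpha=16.0):
        super().__init__()
        # Frozen pretrained weight (stands in for a linear layer of the SSL model).
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # One low-rank (A, B) pair per expert; B starts at zero so the
        # adapted model initially matches the pretrained one.
        self.A = nn.Parameter(torch.randn(num_experts, in_dim, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, rank, out_dim))
        # Router producing soft (dense) expert weights per token.
        self.router = nn.Linear(in_dim, num_experts)
        self.scale = alpha / rank

    def forward(self, x):                        # x: (batch, seq, in_dim)
        gates = self.router(x).softmax(dim=-1)   # (batch, seq, num_experts)
        # Per-expert low-rank update x A_e B_e, computed for all experts at once.
        delta = torch.einsum("bsd,edr,ero->bseo", x, self.A, self.B)
        # Soft-combine the expert updates and add them to the frozen path.
        delta = (gates.unsqueeze(-1) * delta).sum(dim=2)
        return self.base(x) + self.scale * delta

x = torch.randn(2, 50, 768)
layer = SoftMoELoRALinear(768, 768)
print(layer(x).shape)  # torch.Size([2, 50, 768])

Because the routing is soft, every expert receives gradient signal from every language, which is what allows experts to be shared across languages rather than tied to a single one.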

Related benchmarks

Task                          Dataset        Result                 Rank
Automatic Speech Recognition  Fleurs         WER: 10.4              56
Speech Recognition            Common Voice   --                     17
LID                           CommonVoice    Accuracy (eng): 99.4   8
