Lamer-SSL: Layer-aware Mixture of LoRA Experts for Continual Multilingual Expansion of Self-supervised Models without Forgetting

About

Despite their impressive performance, self-supervised speech models often struggle to generalize to new languages and tend to forget previously acquired knowledge during continual training. To address this, we propose Lamer-SSL, a parameter-efficient framework that integrates a Layer-Aware MixturE of LoRA Experts (Lamer) module with a replay strategy. The Lamer module enables flexible balancing between shared and language-specific representations, while layer-aware expert allocation assigns more experts to deeper layers, where semantic information is richer. Meanwhile, the replay strategy retains prior knowledge using minimal data, mitigating forgetting during continual training. Experiments on automatic speech recognition (ASR) and language identification (LID) demonstrate that Lamer-SSL extends self-supervised models to new languages effectively while maintaining strong performance on previously learned languages, with only 2.14% of parameters trainable.
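
To make the architecture concrete, here is a minimal PyTorch sketch of a mixture-of-LoRA-experts layer with layer-aware expert allocation, based only on the abstract above. It is not the authors' implementation: the names (LoRAExpert, MixtureOfLoRA, experts_for_layer), the softmax router, the default rank/alpha values, and the linear allocation schedule are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAExpert(nn.Module):
    """One low-rank adapter: delta(x) = up(down(x)) * (alpha / rank)."""
    def __init__(self, d_in, d_out, rank=8, alpha=16.0):
        super().__init__()
        self.down = nn.Linear(d_in, rank, bias=False)
        self.up = nn.Linear(rank, d_out, bias=False)
        nn.init.zeros_(self.up.weight)  # zero delta at init: starts as the frozen base
        self.scale = alpha / rank

    def forward(self, x):
        return self.up(self.down(x)) * self.scale

class MixtureOfLoRA(nn.Module):
    """A frozen base linear layer plus a softmax-routed mixture of LoRA experts."""
    def __init__(self, base_linear: nn.Linear, num_experts: int, rank: int = 8):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad = False          # only adapters and router are trainable
        d_in, d_out = base_linear.in_features, base_linear.out_features
        self.experts = nn.ModuleList(
            LoRAExpert(d_in, d_out, rank) for _ in range(num_experts))
        self.router = nn.Linear(d_in, num_experts, bias=False)

    def forward(self, x):                                        # x: (B, T, d_in)
        gates = F.softmax(self.router(x), dim=-1)                # (B, T, E)
        deltas = torch.stack([e(x) for e in self.experts], -1)   # (B, T, d_out, E)
        return self.base(x) + (deltas * gates.unsqueeze(-2)).sum(-1)

def experts_for_layer(layer_idx: int, num_layers: int,
                      min_e: int = 2, max_e: int = 8) -> int:
    """Layer-aware allocation: deeper (more semantic) layers get more experts.
    A linear schedule is assumed; the paper's exact allocation may differ."""
    frac = layer_idx / max(num_layers - 1, 1)
    return min_e + round(frac * (max_e - min_e))
```

In practice, such a module would wrap the frozen linear projections of a pretrained SSL encoder (e.g., attention or feed-forward projections in each transformer layer), with experts_for_layer deciding how many experts each depth receives; the replay component described in the abstract would then mix a small buffer of data from previously learned languages into each continual-training batch.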

Jing Xu, Minglin Wu, Xueyuan Chen, Xixin Wu, Helen Meng • 2026

Related benchmarks

Task                          Dataset               Result               Rank
Automatic Speech Recognition  CommonVoice           CER (English): 9.9   6
Automatic Speech Recognition  Fleurs                CER (eng): 9.1       6
Language Identification      CommonVoice & Fleurs  Accuracy (eng): 100  6
