
Residual Tokens Enhance Masked Autoencoders for Speech Modeling

About

Recent speech modeling relies on explicit attributes such as pitch, content, and speaker identity, but these alone cannot capture the full richness of natural speech. We introduce RT-MAE, a novel masked autoencoder framework that augments supervised attribute-based modeling with unsupervised residual trainable tokens, designed to encode the information not explained by the explicit labeled factors (e.g., timbre variations, noise, and emotion). Experiments show that RT-MAE improves reconstruction quality, preserving content and speaker similarity while enhancing expressivity. We further demonstrate its applicability to speech enhancement, removing noise at inference while maintaining controllability and naturalness.
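The core idea above — decoding speech from explicit attribute embeddings plus learnable residual tokens that soak up the unexplained variation, trained with a masked reconstruction loss — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the dimensions, the soft-assignment of frames to residual tokens, and the placeholder decoder are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): T frames of D-dim features.
T, D, K = 50, 16, 4

# Explicit attribute embeddings (pitch, content, speaker), assumed given.
attr_emb = rng.normal(size=(T, D))

# Residual trainable tokens: K vectors, randomly initialised here; in
# RT-MAE they would be learned to encode information the explicit
# attributes miss (timbre variation, noise, emotion, ...).
residual_tokens = rng.normal(size=(K, D))

# Illustrative soft assignment of each frame to the residual tokens.
scores = attr_emb @ residual_tokens.T                      # (T, K)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
residual_emb = weights @ residual_tokens                   # (T, D)

# Decoder input combines supervised attributes with residual information.
decoder_input = attr_emb + residual_emb

# Masked-autoencoder objective: hide a fraction of frames and score the
# reconstruction on the masked frames only (identity stands in for the
# decoder; targets are placeholder speech features).
mask = rng.random(T) < 0.5
target = rng.normal(size=(T, D))
recon = decoder_input
loss = np.mean((recon[mask] - target[mask]) ** 2)
print(decoder_input.shape, loss > 0.0)
```

At inference, the enhancement use case described above would correspond to dropping (or zeroing) the residual contribution that encodes noise while keeping the attribute embeddings, so controllability is preserved.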

Samir Sadok, Stéphane Lathuilière, Xavier Alameda-Pineda • 2026

Related benchmarks

Task              Dataset                       Result      Rank
Speech Denoising  LibriMix (test)               N-MOS 4.25  5
Speech Synthesis  LibriSpeech 360 Clean (test)  STOI 0.82   3
Speech Synthesis  EmoV-DB (test)                STOI 0.76   3
