LauraTSE: Target Speaker Extraction using Auto-Regressive Decoder-Only Language Models

About

We propose LauraTSE, an Auto-Regressive Decoder-Only Language Model for Target Speaker Extraction built upon the LauraGPT backbone. LauraTSE employs a small-scale auto-regressive decoder-only language model that generates the initial layers of the target speech's discrete codec representations from the continuous embeddings of both the mixture and reference speech. These outputs serve as coarse-grained predictions. To refine them, a one-step encoder-only language model reconstructs the full codec representation by integrating information from both the mixture and the reference speech, adding fine-grained details. Experimental results show that our approach achieves promising performance. Additionally, we conduct ablation studies to investigate data scalability and the contribution of the encoder-only model.
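The two-stage pipeline described above can be sketched in toy form. Everything below is a hypothetical illustration, not the authors' implementation: the function names, layer counts, codebook size, and the stand-in models are all assumptions. Stage 1 is an auto-regressive decoder-only model that emits coarse codec tokens one frame at a time, conditioned on mixture and reference embeddings; Stage 2 is a one-step (non-autoregressive) model that fills in all codec layers at once, keeping the coarse tokens as the first layer.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 50            # number of target-speech frames (assumed)
D = 64            # embedding dimension (assumed)
VOCAB = 1024      # codec codebook size (assumed)
FULL_LAYERS = 8   # total codec layers reconstructed in Stage 2 (assumed)

def ar_decoder_step(prefix_tokens, cond):
    """Stand-in for one decoder-only LM step: logits over the codebook,
    conditioned on the tokens generated so far and the conditioning input."""
    h = cond.mean() + len(prefix_tokens) * 0.01  # toy conditioning signal
    return rng.normal(size=VOCAB) + h

def extract_coarse(mixture_emb, reference_emb):
    """Stage 1: auto-regressive generation of the coarse codec tokens."""
    # Concatenate mixture and reference embeddings along the time axis
    # as a simple stand-in for the model's conditioning mechanism.
    cond = np.concatenate([mixture_emb, reference_emb], axis=0)
    tokens = []
    for _ in range(T):
        logits = ar_decoder_step(tokens, cond)
        tokens.append(int(np.argmax(logits)))  # greedy decoding for the sketch
    return np.array(tokens)                    # shape (T,)

def refine_full(coarse_tokens, mixture_emb, reference_emb):
    """Stage 2: one-step encoder-only refinement emitting every codec
    layer in a single forward pass (here faked with random tokens)."""
    full = rng.integers(0, VOCAB, size=(FULL_LAYERS, T))
    full[0] = coarse_tokens  # the coarse prediction stays as the first layer
    return full              # shape (FULL_LAYERS, T)

mixture_emb = rng.normal(size=(T, D))
reference_emb = rng.normal(size=(10, D))

coarse = extract_coarse(mixture_emb, reference_emb)
full = refine_full(coarse, mixture_emb, reference_emb)
print(coarse.shape, full.shape)
```

The sketch only shows the data flow: a sequential loop for the coarse stage versus a single pass for the refinement stage, which is what makes the second stage cheap relative to decoding all codec layers auto-regressively.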

Beilong Tang, Bang Zeng, Ming Li • 2025

Related benchmarks

Task                       | Dataset                | Result          | Rank
Target Speaker Extraction  | Libri2Mix Clean (test) | DNSMOS SIG 3.61 | 9
