
Erasing Your Voice Before It's Heard: Training-free Speaker Unlearning for Zero-shot Text-to-Speech

About

Modern zero-shot text-to-speech (TTS) models offer unprecedented expressivity but also pose serious risks of misuse, as they can synthesize the voices of individuals who never consented. In this context, speaker unlearning aims to prevent the generation of specific speaker identities upon request. Existing approaches rely on retraining, which is costly and limited to speakers seen in the training set. We present TruS, a training-free speaker unlearning framework that shifts the paradigm from data deletion to inference-time control. TruS steers identity-specific hidden activations to suppress target speakers while preserving other attributes (e.g., prosody and emotion). Experimental results show that TruS effectively prevents voice generation for both seen and unseen opt-out speakers, establishing a scalable safeguard for speech synthesis. The demo and code are available at http://mmai.ewha.ac.kr/trus.
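The core idea, steering identity-specific hidden activations at inference time, can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the authors' code: it assumes the opt-out speaker's identity corresponds to a direction in activation space, and removes the activation's component along that direction while leaving orthogonal attributes untouched.

```python
import numpy as np

def suppress_speaker(h: np.ndarray, v: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Subtract alpha times the projection of activation h onto the
    (hypothetical) speaker-identity direction v."""
    v_hat = v / np.linalg.norm(v)           # normalize the steering direction
    identity_component = np.dot(h, v_hat)   # scalar projection onto v_hat
    return h - alpha * identity_component * v_hat

# Toy example: a 4-d activation and an identity direction along the first axis.
h = np.array([2.0, 1.0, 0.0, -1.0])
v = np.array([1.0, 0.0, 0.0, 0.0])
print(suppress_speaker(h, v))  # → [ 0.  1.  0. -1.]
```

With `alpha = 1.0` the identity component is fully projected out; smaller values would only attenuate it. How TruS actually estimates the identity direction and where in the network it intervenes are details of the paper, not of this sketch.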

Myungjin Lee, Eunji Shin, Jiyoung Lee • 2026

Related benchmarks

Task                      Dataset                                  Metric     Result   Rank
Speaker Unlearning        LibriSpeech clean, retain set (test)     WER        1.95     5
Speaker Unlearning        Emilia, seen opt-out set (-SO)           WER        3.25     5
Speaker Unlearning        LibriSpeech, unseen opt-out set (-UO)    WER (UO)   3.26     2
Zero-shot Text-to-Speech  CREMA-D, unseen opt-out set (-UO)        SIM-UO     13.1     2
