
Privacy-Preserving End-to-End Full-Duplex Speech Dialogue Models

About

End-to-end full-duplex speech models feed user audio through an always-on LLM backbone, yet the speaker-privacy implications of their hidden representations remain unexamined. Following the VoicePrivacy 2024 protocol with a lazy-informed attacker, we show that the hidden states of SALM-Duplex and Moshi leak substantial speaker identity across all transformer layers: SALM-Duplex leaks more strongly in early layers, Moshi leaks uniformly, and turn-wise analysis shows linkability rising sharply within the first few turns. We propose two streaming anonymization setups built on Stream-Voice-Anon: a waveform-level front-end (Anon-W2W) and a feature-domain replacement (Anon-W2F). Anon-W2F raises EER by more than 3.5× over the discrete-encoder baseline (from 11.2% to 41.0%), approaching the 50% random-chance ceiling, while Anon-W2W retains 78–93% of baseline sBERT across setups at sub-second response latency (FRL under 0.8 s).
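The privacy numbers above are equal error rates (EER): the operating point where a speaker verifier's false-acceptance rate equals its false-rejection rate, with 50% meaning the attacker does no better than chance. As a minimal illustrative sketch (not the paper's evaluation code), EER can be estimated from genuine/impostor similarity scores by a threshold sweep:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate EER from verification scores.

    genuine: scores for same-speaker trials
    impostor: scores for different-speaker trials
    Returns the rate where FAR (impostors accepted) ~= FRR (genuines rejected).
    """
    scores = np.concatenate([genuine, impostor])
    labels = np.concatenate([np.ones(len(genuine)), np.zeros(len(impostor))])
    order = np.argsort(scores)          # sweep thresholds in ascending order
    labels = labels[order]
    # FRR at threshold i: fraction of genuine trials scored at or below it.
    frr = np.cumsum(labels) / labels.sum()
    # FAR at threshold i: fraction of impostor trials scored above it.
    far = 1.0 - np.cumsum(1 - labels) / (1 - labels).sum()
    idx = np.argmin(np.abs(far - frr))
    return 0.5 * (far[idx] + frr[idx])

# Toy example: well-separated score distributions give a low EER;
# fully overlapping distributions push EER toward 0.5 (chance).
rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.3, 1000)
impostor = rng.normal(0.0, 0.3, 1000)
print(equal_error_rate(genuine, impostor))
```

Under this reading, anonymization that moves EER from 11.2% toward 41.0% is pushing the attacker's score distributions into heavy overlap, close to the 50% ceiling.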

Nikita Kuzmin, Tao Zhong, Jiajun Deng, Yingke Zhu, Tristan Tsoi, Tianxiang Cao, Simon Lui, Kong Aik Lee, Eng Siong Chng • 2026

Related benchmarks

Task              Dataset                    Metric        Result  Rank
Dialogue Quality  VPC 2024 (evaluation)      sBLEU (S2T)   7.18    6
Efficiency        VPC 2024 (evaluation set)  RTFx          1.6     6
Privacy           VPC 2024 (evaluation)      EER           6.4     6
