TVTSyn: Content-Synchronous Time-Varying Timbre for Streaming Voice Conversion and Anonymization
About
Real-time voice conversion and speaker anonymization require causal, low-latency synthesis without sacrificing intelligibility or naturalness. Current systems suffer from a core representational mismatch: content is time-varying, while speaker identity is injected as a static global embedding. We introduce a streamable speech synthesizer that aligns the temporal granularity of identity and content via a content-synchronous, time-varying timbre (TVT) representation. A Global Timbre Memory expands a global timbre instance into multiple compact facets; frame-level content attends to this memory, a gate regulates the amount of variation, and spherical interpolation preserves identity geometry while enabling smooth local changes. In addition, a factorized vector-quantized bottleneck regularizes the content representation to reduce residual speaker leakage. The resulting system is streamable end-to-end with under 80 ms GPU latency. Experiments show improvements in naturalness, speaker transfer, and anonymization over state-of-the-art streaming baselines, establishing TVT as a scalable approach to privacy-preserving and expressive speech synthesis under strict latency budgets.
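The TVT mechanism described above can be sketched numerically. This is a minimal, illustrative NumPy toy, not the released implementation: the facet projections (`W_facet`), gate weights (`w_gate`), dimensions, and the gate-scaled interpolation coefficient are all hypothetical stand-ins for learned components. It shows the data flow only: a global timbre vector is expanded into a small memory of facets, frame-level content attends over that memory, a sigmoid gate bounds the per-frame deviation, and slerp keeps each frame's timbre on the identity hypersphere.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 8, 4, 5  # embedding dim, facet count, content frames (toy sizes)

def slerp(a, b, t):
    """Spherical interpolation between vectors a and b (normalized first)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(a @ b, -1.0, 1.0))
    if omega < 1e-6:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical learned parameters (random here for illustration)
W_facet = rng.standard_normal((K, d, d)) / np.sqrt(d)  # expands g into K facets
w_gate = rng.standard_normal(d) / np.sqrt(d)           # variation-gate weights

g = rng.standard_normal(d)                   # global timbre embedding
facets = np.einsum('kij,j->ki', W_facet, g)  # (K, d) Global Timbre Memory

content = rng.standard_normal((T, d))        # frame-level content features
attn = softmax(content @ facets.T / np.sqrt(d))  # (T, K) frame-to-facet attention
local = attn @ facets                        # (T, d) attended facet mixture

gate = 1.0 / (1.0 + np.exp(-(content @ w_gate)))  # (T,) gate in (0, 1)

# Gate-scaled slerp: each frame moves smoothly from the global identity g
# toward its local facet mixture while staying on the unit hypersphere.
tvt = np.stack([slerp(g, local[i], 0.5 * gate[i]) for i in range(T)])
```

Because slerp operates on the normalized vectors, every per-frame timbre vector in `tvt` has unit norm, which is what "preserving identity geometry" amounts to in this sketch: local variation changes direction on the sphere but never collapses or inflates the embedding's magnitude.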
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Speaker Anonymization | VoicePrivacy Challenge 2024 (test) | WER 5.35 | 14 |
| Accent Neutralization | L2-ARCTIC Indian English speakers | CNA 100 | 6 |
| Speech Quality Assessment | L2-ARCTIC Indian English speakers | NISQA-MOS 4.46 | 6 |
| Speaker Similarity Analysis | L2-ARCTIC Indian English speakers | SpkSim 0.86 | 3 |