
Sylber: Syllabic Embedding Representation of Speech from Raw Audio

About

Syllables are compositional units of spoken language that efficiently structure human speech perception and production. However, current neural speech representations lack such structure, resulting in dense token sequences that are costly to process. To bridge this gap, we propose a new model, Sylber, that produces speech representations with clean and robust syllabic structure. Specifically, we propose a self-supervised learning (SSL) framework that bootstraps syllabic embeddings by distilling from its own initial unsupervised syllabic segmentation. This results in a highly structured representation of speech features, offering three key benefits: 1) a fast, linear-time syllable segmentation algorithm, 2) efficient syllabic tokenization with an average of 4.27 tokens per second, and 3) novel phonological units suited for efficient spoken language modeling. Our proposed segmentation method is highly robust and generalizes to out-of-domain data and unseen languages without any tuning. By training token-to-speech generative models, fully intelligible speech can be reconstructed from Sylber tokens with a significantly lower bitrate than baseline SSL tokens. This suggests that our model effectively compresses speech into a compact sequence of tokens with minimal information loss. Lastly, we demonstrate that categorical perception, a linguistic phenomenon in speech perception, emerges naturally in Sylber, making the embedding space more categorical and sparse than previous speech features and thus supporting the high efficiency of our tokenization. Together, we present a novel SSL approach for representing speech as syllables, with significant potential for efficient speech tokenization and spoken language modeling.
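The abstract highlights a linear-time segmentation over frame embeddings. The paper's actual algorithm is not reproduced here; as an illustrative sketch only, the following hypothetical greedy pass merges consecutive frames into segments while their cosine similarity to the running segment mean stays above a threshold, then mean-pools each segment into one embedding. A single pass over T frames gives O(T) time.

```python
import numpy as np

def segment_frames(frames: np.ndarray, threshold: float = 0.9):
    """Greedy one-pass segmentation of frame embeddings (illustrative,
    not Sylber's actual method).

    A frame opens a new segment when its cosine similarity to the
    running mean of the current segment drops below `threshold`.
    Returns (boundaries, pooled) where boundaries is a list of
    (start, end) frame indices and pooled stacks one mean-pooled
    embedding per segment.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    boundaries, pooled = [], []
    start = 0
    mean = frames[0].astype(float).copy()
    for t in range(1, len(frames)):
        if cos(frames[t], mean) < threshold:
            # Close the current segment and start a new one at frame t.
            boundaries.append((start, t))
            pooled.append(frames[start:t].mean(axis=0))
            start = t
            mean = frames[t].astype(float).copy()
        else:
            # Update the running mean over frames[start..t].
            n = t - start
            mean = (mean * n + frames[t]) / (n + 1)
    boundaries.append((start, len(frames)))
    pooled.append(frames[start:].mean(axis=0))
    return boundaries, np.stack(pooled)
```

On two well-separated clusters of frames this yields one segment per cluster; the threshold trades segment granularity against the token rate.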

Cheol Jun Cho, Nicholas Lee, Akshat Gupta, Dhruv Agarwal, Ethan Chen, Alan W Black, Gopala K. Anumanchipalli • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Speech Reconstruction | LibriTTS (test-other) | UTMOS | 3.91 | 44
Universal Speech Representation Evaluation | SUPERB Benchmark | SID Accuracy | 0.5124 | 27
Automatic Speech Recognition | Bemba (bem) low-resource | CER | 0.236 | 7
Automatic Speech Recognition | Korean (ko) low-resource | CER | 22 | 7
Automatic Speech Recognition | Quechua (que) low-resource | CER | 44.2 | 7
Speech Resynthesis | LibriTTS (test-clean) | WER | 5.44 | 7
Speech Resynthesis | FLEURS-R Spanish (test) | WER | 10.66 | 7
Speech Resynthesis | FLEURS-R 20 Languages (test) | WER | 28.42 | 7
Singing Voice Resynthesis | GTSinger (test) | F0-PCC | 0.78 | 7
Syllable discovery | LibriSpeech combined (test) | PC Purity | 73.5 | 4

(10 of 11 rows shown)
