
ZeroSyl: Simple Zero-Resource Syllable Tokenization for Spoken Language Modeling

About

Pure speech language models aim to learn language directly from raw audio without textual resources. A key challenge is that discrete tokens from self-supervised speech encoders result in excessively long sequences, motivating recent work on syllable-like units. However, methods like Sylber and SyllableLM rely on intricate multi-stage training pipelines. We propose ZeroSyl, a simple training-free method to extract syllable boundaries and embeddings directly from a frozen WavLM model. Using L2 norms of features in WavLM's intermediate layers, ZeroSyl achieves competitive syllable segmentation performance. The resulting segments are mean-pooled, discretized using K-means, and used to train a language model. ZeroSyl outperforms prior syllabic tokenizers across lexical, syntactic, and narrative benchmarks. Scaling experiments show that while finer-grained units are beneficial for lexical tasks, our discovered syllabic units exhibit better scaling behavior for syntactic modeling.
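The pipeline described above (norm-based boundary detection, mean-pooling, K-means discretization) can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: real features would come from a frozen WavLM's intermediate layers, random features stand in here, and the exact boundary rule (local minima of the per-frame L2-norm curve) is an assumption.

```python
# Hypothetical sketch of a ZeroSyl-style tokenizer.
# Assumptions: frame features stand in for WavLM intermediate-layer
# outputs, and boundaries are placed at local minima of the L2 norm.
import numpy as np

def syllable_boundaries(features):
    """Place boundaries at local minima of the per-frame L2 norm."""
    norms = np.linalg.norm(features, axis=1)
    mins = [t for t in range(1, len(norms) - 1)
            if norms[t] < norms[t - 1] and norms[t] < norms[t + 1]]
    return [0] + mins + [len(norms)]

def mean_pool(features, boundaries):
    """One embedding per segment: mean of the frames it spans."""
    return np.stack([features[a:b].mean(axis=0)
                     for a, b in zip(boundaries[:-1], boundaries[1:])])

def kmeans_tokens(segments, k=4, iters=20, seed=0):
    """Plain NumPy K-means; maps each segment to an integer unit ID."""
    rng = np.random.default_rng(seed)
    centroids = segments[rng.choice(len(segments), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(segments[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = segments[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))   # stand-in for 100 WavLM frames
bounds = syllable_boundaries(feats)
segs = mean_pool(feats, bounds)      # one vector per discovered segment
tokens = kmeans_tokens(segs)         # discrete units for the LM
```

The resulting `tokens` sequence is what a downstream language model would be trained on; in the actual method the number of clusters is far larger than the toy `k=4` used here.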

Nicol Visser, Simon Malan, Danel Slabbert, Herman Kamper • 2026

Related benchmarks

Task                      Dataset                        Result            Rank
Spoken Language Modeling  sLM21 (dev)                    sWUGGY (all): 68  4
Syllable discovery        LibriSpeech combined (test)    PC Purity: 80.5   4
