SpidR: Learning Fast and Stable Linguistic Units for Spoken Language Models Without Supervision
About
The parallel advances in language modeling and speech representation learning have raised the prospect of learning language directly from speech, without textual intermediates. This requires extracting semantic representations from the audio signal itself. Our contributions are threefold. First, we introduce SpidR, a self-supervised speech representation model that efficiently learns representations with highly accessible phonetic information, making it particularly suited for textless spoken language modeling. It is trained on raw waveforms using a masked prediction objective combined with self-distillation and online clustering: the intermediate layers of the student model learn to predict cluster assignments derived from the teacher's intermediate layers. This learning objective stabilizes the online clustering procedure compared to previous approaches, resulting in higher-quality codebooks. SpidR outperforms wav2vec 2.0, HuBERT, WavLM, and DinoSR on downstream language modeling benchmarks (sWUGGY, sBLIMP, tSC). Second, we systematically evaluate, across models and layers, the correlation between speech unit quality (ABX, PNMI) and language modeling performance, validating these metrics as reliable proxies. Finally, SpidR significantly reduces pretraining time compared to HuBERT, requiring only one day of pretraining on 16 GPUs instead of a week. This speedup is enabled by the pretraining method and an efficient codebase, which together allow faster iteration and easier experimentation. We open-source the training code and model checkpoints at https://github.com/facebookresearch/spidr.
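The training objective described above can be illustrated with a minimal NumPy sketch. This is not the SpidR implementation: the dimensions, function names, and the hard nearest-neighbor assignment are simplifications chosen for clarity (the actual model operates on transformer layers over raw waveforms, and its online clustering and EMA schedules differ). It only shows the general shape of the loss: teacher features are quantized against a codebook, and the student is trained with cross-entropy to predict those assignments at masked positions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): T frames, D-dim features, K codes
T, D, K = 50, 16, 8

def ema_update(teacher_w, student_w, decay=0.999):
    """Self-distillation: teacher weights track the student via an
    exponential moving average rather than gradient descent."""
    return decay * teacher_w + (1 - decay) * student_w

def cluster_assign(feats, codebook):
    """Assign each teacher frame to its nearest codebook entry
    (a hard-assignment stand-in for the online clustering step)."""
    dists = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def masked_prediction_loss(student_logits, targets, mask):
    """Cross-entropy restricted to masked frames: the student predicts the
    teacher-derived cluster assignment at each masked time step."""
    log_probs = student_logits - np.log(np.exp(student_logits).sum(axis=-1, keepdims=True))
    return -log_probs[mask, targets[mask]].mean()

# Toy forward pass
teacher_feats = rng.standard_normal((T, D))   # teacher intermediate-layer features
codebook = rng.standard_normal((K, D))        # codebook maintained by online clustering
targets = cluster_assign(teacher_feats, codebook)
student_logits = rng.standard_normal((T, K))  # student predictions over the K codes
mask = rng.random(T) < 0.5                    # randomly masked time steps
loss = masked_prediction_loss(student_logits, targets, mask)
```

In the full method, the codebook itself is also updated online during training, and the prediction loss is applied at several intermediate layers rather than a single one.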
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Automatic Speech Recognition | LibriSpeech (test-other) | WER | 15.9 | 966 |
| Automatic Speech Recognition | LibriSpeech (dev-other) | WER | 15.8 | 411 |
| Automatic Speech Recognition | LibriSpeech (test-clean) | WER | 6.3 | 84 |
| Speech Recognition | LibriSpeech clean (dev) | WER | 0.061 | 59 |
| Discrete unit quality evaluation | LibriSpeech 960h | ABX | 6.31 | 9 |
| Spoken Language Modeling | Libri-Light 6k | sWUGGY (all) | 71.89 | 9 |
| Speech Processing | SUPERB | PER | 4.86 | 9 |
| Phonetic Discriminability (ABX) | LibriSpeech clean (dev) | ABX (within-speaker) | 3.32 | 7 |
| Phonetic Discriminability (ABX) | LibriSpeech other (dev) | ABX (within-speaker) | 3.74 | 7 |
| Self-supervised pretraining | Libri-Light 6k | Pretraining Time (hr) | 23 | 7 |