Differentiable K-means for Fully-optimized Discrete Token-based ASR
About
Recent studies have highlighted the potential of discrete tokens derived from self-supervised learning (SSL) models for various speech-related tasks. These tokens serve not only as substitutes for text in language modeling but also as intermediate representations for tasks such as automatic speech recognition (ASR). However, discrete tokens are typically obtained via k-means clustering of SSL features independently of the downstream task, making them suboptimal for specific applications. This paper proposes the use of differentiable k-means, enabling joint optimization of tokenization and the downstream task. The approach also allows fine-tuning of the SSL parameters and learning of weights for the outputs of multiple SSL layers. Experiments were conducted with ASR as the downstream task: ASR accuracy improved thanks to the optimized tokens, and the acquired tokens exhibited greater phonetic purity, which proved useful even in speech resynthesis.
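The core idea can be illustrated with a minimal sketch of differentiable k-means quantization. This is an assumption-laden illustration, not the paper's exact formulation: it replaces the hard nearest-centroid assignment with a temperature-controlled softmax over negative squared distances, and uses a straight-through-style combination so the forward pass yields discrete tokens while gradients flow through the soft assignments. The function names (`soft_kmeans_assign`, `quantize`) and the temperature parameter are hypothetical.

```python
import numpy as np

def soft_kmeans_assign(features, centroids, temperature=1.0):
    """Soft cluster assignments: softmax over negative squared distances.

    features:  (T, D) frame-level SSL features
    centroids: (K, D) k-means codebook
    returns:   (T, K) assignment probabilities (rows sum to 1)
    """
    # Squared Euclidean distance from every frame to every centroid.
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs

def quantize(features, centroids, temperature=1.0):
    """Quantize features to centroid vectors, differentiably.

    Forward pass uses the hard (argmax) centroid; the soft expectation
    serves as the gradient surrogate (straight-through style).
    """
    probs = soft_kmeans_assign(features, centroids, temperature)
    hard = centroids[probs.argmax(axis=1)]  # discrete token embeddings
    soft = probs @ centroids                # differentiable surrogate
    # In an autograd framework, (hard - soft) would be detached so that
    # gradients reach the centroids and SSL encoder through `soft`.
    return soft + (hard - soft)
```

As the temperature approaches zero, the soft assignments collapse to the hard one-hot assignments of standard k-means; larger temperatures give smoother gradients at the cost of a looser approximation.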
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Speaker Identification | VoxCeleb1 | Accuracy | 20.6 | 58 |
| Automatic Speech Recognition | LibriSpeech 100h (test-clean) | WER | 4 | 32 |
| Automatic Speech Recognition | LibriSpeech 100h (test-other) | WER | 7 | 10 |
| Lexical and syntactic knowledge assessment | Zero Resource Speech Challenge | sWUGGY | 70 | 6 |
| Speech continuation quality assessment | LibriLight Speech Continuation | GenPPL | 5.6 | 6 |
| Emotion Recognition | RAVDESS (speaker-independent) | Accuracy | 41.7 | 6 |
| Voice Conversion | TIMIT OOD | F0 Correlation | 0.385 | 6 |
| Voice Conversion | Expresso OOD | F0 Correlation | 0.391 | 6 |
| Sentiment and speaker consistency assessment | SALMon | Sentiment Accuracy | 61 | 6 |
| Speech Reconstruction | LJSpeech ID | MCD | 5.77 | 6 |