# PAST: Phonetic-Acoustic Speech Tokenizer

## About
We present PAST, a novel end-to-end framework that jointly models phonetic information alongside signal reconstruction, eliminating the need for external pretrained models. Unlike previous approaches that rely on pretrained self-supervised models, PAST uses supervised phonetic data, integrating domain knowledge directly into the tokenization process via auxiliary tasks. We additionally introduce a streamable, causal variant of PAST, enabling real-time speech applications. Results show that PAST surpasses the evaluated baseline tokenizers across common evaluation metrics, including phonetic representation and speech reconstruction. Notably, PAST also achieves superior performance when serving as a speech representation for speech language models, further highlighting its effectiveness as a foundation for spoken language generation. To foster further research, we release the full implementation. For code, model checkpoints, and samples, see: https://pages.cs.huji.ac.il/adiyoss-lab/PAST
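The joint objective described above can be sketched as a reconstruction term plus a weighted supervised phonetic auxiliary term. The sketch below is illustrative only; the function names, loss choices, and weighting are assumptions, not PAST's actual API or exact training losses.

```python
import math

# Hedged sketch of a joint tokenizer objective: signal reconstruction
# combined with a supervised phonetic auxiliary task. All names and
# loss forms here are illustrative assumptions, not PAST's real code.

def reconstruction_loss(x, x_hat):
    """Mean absolute error between a waveform and its reconstruction
    (a simple stand-in for the reconstruction losses)."""
    return sum(abs(a - b) for a, b in zip(x, x_hat)) / len(x)

def phonetic_loss(frame_logits, frame_targets):
    """Frame-level cross-entropy against supervised phoneme labels
    (a stand-in for the auxiliary phonetic head)."""
    total = 0.0
    for logits, target in zip(frame_logits, frame_targets):
        m = max(logits)  # log-sum-exp with max-shift for stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[target]
    return total / len(frame_targets)

def joint_objective(x, x_hat, frame_logits, frame_targets, aux_weight=1.0):
    """Joint objective: reconstruction plus weighted phonetic term."""
    return reconstruction_loss(x, x_hat) + aux_weight * phonetic_loss(
        frame_logits, frame_targets
    )
```

The auxiliary weight trades off acoustic fidelity against phonetic informativeness; the actual architecture and loss balancing are detailed in the released implementation.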
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Automatic Speech Recognition | LibriSpeech (test-other) | WER | 37.02 | 966 |
| Automatic Speech Recognition | LibriSpeech clean (test) | WER | 7.9 | 833 |
| Text-to-Speech | Seed-TTS (eval) | WER | 9 | 39 |
| Voice Conversion | VCTK | WER | 22.9 | 21 |
| Speech Reconstruction | Salmon Sentiment Consistency emotional 2025b (OOD) | WER | 3 | 18 |
| Speech Recognition | Switchboard | WER | 28.9 | 18 |
| Intent Detection | SLURP | Accuracy | 59.5 | 16 |
| Speech Reconstruction | LibriSpeech clean (test) | WER | 2.1 | 15 |
| Text-to-Speech | LibriTTS clean (test) | WER | 0.08 | 15 |
| Audio Encoding and Decoding Efficiency | NVIDIA A6000 Efficiency Benchmark | RTF (Encoding) | 0.0012 | 12 |
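The last row reports an encoding real-time factor (RTF). By the standard definition, RTF is processing time divided by audio duration, so values far below 1.0 mean much faster than real time. A minimal helper illustrating the arithmetic (the function name is ours, not from the PAST codebase):

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """RTF = time spent processing / duration of the processed audio.
    RTF < 1.0 means the system runs faster than real time."""
    return processing_seconds / audio_seconds

# Example: encoding 10 s of audio in 0.012 s yields RTF = 0.0012,
# matching the encoding figure reported in the table above.
```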