
PAST: Phonetic-Acoustic Speech Tokenizer

About

We present PAST, a novel end-to-end framework that jointly models phonetic information alongside signal reconstruction, eliminating the need for external pretrained models. Unlike previous approaches that rely on pretrained self-supervised models, PAST uses supervised phonetic data, integrating domain knowledge directly into the tokenization process via auxiliary tasks. We additionally introduce a streamable, causal variant of PAST that enables real-time speech applications. Results show that PAST surpasses the evaluated baseline tokenizers across common metrics, including phonetic representation quality and speech reconstruction. Notably, PAST also performs best when serving as the speech representation for speech language models, further highlighting its effectiveness as a foundation for spoken language generation. To foster further research, we release the full implementation. Code, model checkpoints, and samples: https://pages.cs.huji.ac.il/adiyoss-lab/PAST
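To make the joint objective concrete, here is a minimal NumPy sketch of the idea described above: a tokenizer that quantizes encoder latents into discrete tokens while training against both a reconstruction loss and an auxiliary frame-level phonetic classification loss. All dimensions, weights, and the single-stage quantizer are hypothetical simplifications for illustration; they are not the actual PAST architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
T, D_in, D_lat, K, n_phones = 50, 16, 8, 32, 12

# "Encoder" and "decoder": single linear maps standing in for the
# real convolutional stacks.
W_enc = rng.normal(scale=0.1, size=(D_in, D_lat))
W_dec = rng.normal(scale=0.1, size=(D_lat, D_in))

# Codebook for a single vector-quantization stage.
codebook = rng.normal(size=(K, D_lat))

# Frame-level phonetic classifier head (the auxiliary task).
W_phone = rng.normal(scale=0.1, size=(D_lat, n_phones))

def tokenize_and_losses(x, phone_targets):
    """Encode, quantize, decode, and score both objectives."""
    z = x @ W_enc                                   # (T, D_lat) latents
    # Nearest-neighbour code assignment gives the discrete tokens.
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    tokens = d.argmin(axis=1)                       # (T,) token ids
    z_q = codebook[tokens]                          # quantized latents
    # Signal-reconstruction objective (MSE on the toy features).
    x_hat = z_q @ W_dec
    recon_loss = np.mean((x - x_hat) ** 2)
    # Auxiliary phonetic objective: frame-wise cross-entropy,
    # injecting supervised phonetic labels into the latent space.
    logits = z_q @ W_phone
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    phone_loss = -np.mean(logp[np.arange(T), phone_targets])
    return tokens, recon_loss + phone_loss

x = rng.normal(size=(T, D_in))
targets = rng.integers(0, n_phones, size=T)
tokens, loss = tokenize_and_losses(x, targets)
print(tokens.shape)
```

The key design point the sketch tries to capture is that the phonetic head is trained on the *quantized* latents, so gradient pressure from the supervised labels shapes the tokens themselves rather than a separate representation.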

Nadav Har-Tuv, Or Tal, Yossi Adi • 2025

Related benchmarks

Task | Dataset | Result | Rank
Automatic Speech Recognition | LibriSpeech (test-other) | WER 37.02 | 966
Automatic Speech Recognition | LibriSpeech clean (test) | WER 7.9 | 833
Text-to-Speech | Seed-TTS (eval) | WER 9 | 39
Voice Conversion | VCTK | WER 22.9 | 21
Speech Reconstruction | Salmon Sentiment Consistency emotional 2025b (OOD) | WER 3 | 18
Speech Recognition | Switchboard | WER 28.9 | 18
Intent Detection | SLURP | Accuracy 59.5 | 16
Speech Reconstruction | LibriSpeech clean (test) | WER 2.1 | 15
Text-to-Speech | LibriTTS clean (test) | WER 0.08 | 15
Audio Encoding and Decoding Efficiency | NVIDIA A6000 Efficiency Benchmark | RTF (Encoding) 0.0012 | 12
(Showing 10 of 15 benchmark rows.)
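The efficiency row reports a real-time factor (RTF), defined as wall-clock processing time divided by audio duration, so values below 1 mean faster-than-real-time operation (an RTF of 0.0012 would be roughly 800x faster than playback). A small illustration of how such a number is measured, using a trivial stand-in for the encoder (the measurement helper and the toy workload are assumptions, not PAST's benchmark harness):

```python
import time
import numpy as np

def real_time_factor(process, audio, sample_rate):
    """RTF = wall-clock processing time / audio duration."""
    start = time.perf_counter()
    process(audio)
    elapsed = time.perf_counter() - start
    duration = len(audio) / sample_rate
    return elapsed / duration

# Toy stand-in "encoder": a cheap elementwise op on 10 s of silence.
sr = 16000
audio = np.zeros(10 * sr, dtype=np.float32)
rtf = real_time_factor(np.tanh, audio, sr)
print(rtf < 1.0)  # below 1.0 means the "encoder" keeps up with real time
```

A low encoding RTF is what makes the causal, streamable variant practical: the tokenizer can consume audio as it arrives without falling behind.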
