Streaming Sequence-to-Sequence Learning with Delayed Streams Modeling
About
We introduce Delayed Streams Modeling (DSM), a flexible formulation for streaming, multimodal sequence-to-sequence learning. Sequence-to-sequence generation is often cast in an offline manner, where the model consumes the complete input sequence before generating the first output timestep. Alternatively, streaming sequence-to-sequence models rely on learning a policy for choosing when to advance on the input stream or write to the output stream. DSM instead models already time-aligned streams with a decoder-only language model. By moving the alignment to a pre-processing step, and by introducing appropriate delays between streams, DSM provides streaming inference of arbitrary output sequences from any input combination, making it applicable to many sequence-to-sequence problems. In particular, given text and audio streams, automatic speech recognition (ASR) corresponds to the text stream being delayed, while the opposite gives a text-to-speech (TTS) model. We perform extensive experiments for these two major sequence-to-sequence tasks, showing that DSM provides state-of-the-art performance and latency while supporting arbitrarily long sequences, and is even competitive with offline baselines. Code, samples and demos are available at https://github.com/kyutai-labs/delayed-streams-modeling
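The core idea above — time-aligned streams merged into one per-timestep sequence, with one stream shifted by a fixed delay — can be sketched as follows. This is a minimal illustrative sketch, not the actual kyutai implementation; the helper names (`delay_stream`, `merge_streams`) and the `<pad>` token are assumptions for illustration only.

```python
# Illustrative sketch of the delayed-streams idea (hypothetical helpers,
# not the actual DSM codebase): two time-aligned token streams are merged
# into one frame-per-timestep sequence, with one stream shifted by `delay`.

PAD = "<pad>"  # filler token occupying the delayed stream's initial steps


def delay_stream(stream, delay, pad=PAD):
    """Shift a time-aligned stream right by `delay` steps, padding the front."""
    return [pad] * delay + stream


def merge_streams(audio, text, text_delay=0, audio_delay=0, pad=PAD):
    """Zip both streams into per-timestep frames for a decoder-only LM."""
    a = delay_stream(audio, audio_delay, pad)
    t = delay_stream(text, text_delay, pad)
    n = max(len(a), len(t))
    a += [pad] * (n - len(a))  # pad the shorter stream at the tail
    t += [pad] * (n - len(t))
    return list(zip(a, t))


audio = ["a0", "a1", "a2", "a3"]
text = ["t0", "t1", "t2", "t3"]

# ASR-style: text delayed, so the model hears audio before emitting text.
asr_frames = merge_streams(audio, text, text_delay=2)
# -> [("a0", "<pad>"), ("a1", "<pad>"), ("a2", "t0"), ("a3", "t1"),
#     ("<pad>", "t2"), ("<pad>", "t3")]

# TTS-style: audio delayed, so the model reads text before emitting audio.
tts_frames = merge_streams(audio, text, audio_delay=2)
```

Swapping which stream carries the delay is what turns the same architecture into an ASR or a TTS model, and inference stays streaming because each frame depends only on past frames.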
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Speech | Seed-TTS en (test) | WER | 1.34 | 90 |
| Text-to-Speech | LibriSpeech PC clean (test) | WER | 1.63 | 31 |
| Text-to-Speech | EmergentTTS (eval) | Overall WER | 9.1 | 25 |
| Automatic Speech Recognition | TED-LIUM | WER | 2.9 | 18 |
| Automatic Speech Recognition | Rev 16 | WER | 12.3 | 9 |
| Text-to-Speech | Emilia EN speaking-rate | MUSHRA Score | 60.5 | 9 |
| Automatic Speech Recognition | Earnings21 | WER | 10.6 | 5 |
| Automatic Speech Recognition | Meanwhile | WER | 5.7 | 5 |