# TASU: Text-Only Alignment for Speech Understanding
## About
Recent advances in Speech Large Language Models (Speech LLMs) have paved the way for unified architectures across diverse speech understanding tasks. However, prevailing alignment paradigms rely heavily on large-scale audio-text paired data and computationally intensive training, and often generalize poorly to unseen domains and tasks. To address these limitations, we propose TASU (Text-only Alignment for Speech Understanding), a novel alignment paradigm that leverages only unpaired text data to guide cross-modal alignment. Experiments show that TASU achieves competitive zero-shot speech recognition performance. Leveraging this property, TASU can also serve as a pre-training stage in curriculum learning, improving domain generalization in speech recognition. Finally, TASU extends its zero-shot generalization to a wide range of speech understanding tasks and notably outperforms prominent Speech LLMs, including GLM-4-Voice and Step-Audio, on the MMSU benchmark, establishing TASU as an efficient and scalable alignment paradigm for Speech LLMs.
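The abstract does not spell out the alignment mechanism, so the following is purely an illustrative toy, not TASU's published recipe: all names, dimensions, and the linear-adapter setup are assumptions. It pictures cross-modal alignment as fitting an adapter that maps features from a speech-encoder-like space into the LLM's text embedding space, using only text-derived targets (no paired audio).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: an LLM text-embedding table (the target space),
# and "speech-space" features for the same tokens, simulated by a fixed
# unknown linear map plus noise (standing in for a real speech encoder).
vocab, d_text, d_speech = 50, 16, 32
text_emb = rng.normal(size=(vocab, d_text))           # alignment targets
A = rng.normal(size=(d_text, d_speech))               # unknown modality map
speech_feats = text_emb @ A + 0.01 * rng.normal(size=(vocab, d_speech))

# "Text-only alignment" in this toy: fit a linear adapter W that pulls
# speech-space features onto the text embeddings, via least squares.
W, *_ = np.linalg.lstsq(speech_feats, text_emb, rcond=None)
aligned = speech_feats @ W

err = float(np.mean((aligned - text_emb) ** 2))
print(f"mean squared alignment error: {err:.2e}")
```

The point of the sketch is only that alignment targets can come entirely from the text side; a real system would train a nonlinear adapter with gradient descent rather than a closed-form linear fit.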
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Automatic Speech Recognition | LibriSpeech Other | WER | 9.9 | 96 |
| Automatic Speech Recognition | LibriSpeech Clean | WER | 4.21 | 80 |
| Automatic Speech Recognition | TED-LIUM 3 | WER | 13.23 | 45 |
| Automatic Speech Recognition | SlideSpeech | WER | 18.7 | 6 |
| Speech-to-text Translation | CoVoST2 en-zh | BLEU | 33.35 | 5 |
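Most rows above report word error rate (WER), the word-level Levenshtein distance between the hypothesis and the reference, divided by the reference length. A minimal self-contained implementation (the function name is ours, not from the TASU codebase):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]  # distance from ref[:i] to the empty hypothesis
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (r != h)))    # substitution (0 if match)
        prev = cur
    return prev[-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion
```

Lower is better; a WER of 9.9 on LibriSpeech Other means roughly one word error per ten reference words.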