Ti-Audio: The First Multi-Dialectal End-to-End Speech LLM for Tibetan
About
Recent advances in Speech Large Language Models (Speech-LLMs) have greatly enhanced multimodal interaction capabilities. However, their application in low-resource, dialect-diverse environments remains challenging. The severe scarcity of Tibetan data, coupled with the phonetic differences among its three major dialects (Ü-Tsang, Amdo, and Kham), is a prime example of this challenge. This paper proposes Ti-Audio, the first multi-dialectal end-to-end Speech-LLM for Tibetan. To efficiently align speech and text, we introduce a Dynamic Q-Former Adapter that extracts essential acoustic features from variable-length speech, ensuring stable cross-modal alignment even with limited data. At the data level, we leverage mutual assistance among related dialects to alleviate data scarcity and employ a temperature-based sampling strategy to maximize this synergy. Experimental results demonstrate that Ti-Audio achieves state-of-the-art performance on Tibetan benchmarks for automatic speech recognition and speech translation. Our work validates the effectiveness of cross-dialectal cooperation and provides a scalable paradigm for developing Speech-LLMs in low-resource scenarios.
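The abstract does not spell out the temperature-based sampling strategy; a common formulation in multilingual/multi-dialect training reweights each corpus's raw proportion p_i as p_i^(1/T) and renormalizes, so that higher temperatures flatten the distribution and upsample low-resource dialects. A minimal sketch, assuming this standard formulation (the dialect corpus sizes and the temperature value below are illustrative, not from the paper):

```python
def temperature_sampling_weights(sizes, temperature=5.0):
    """Compute per-dialect sampling probabilities.

    sizes: dict mapping dialect name -> number of utterances.
    A temperature T > 1 flattens the distribution, upsampling
    low-resource dialects relative to their raw data share;
    T = 1 recovers proportional sampling.
    """
    total = sum(sizes.values())
    # Raw data proportions p_i
    probs = {d: n / total for d, n in sizes.items()}
    # Temperature-scaled weights p_i^(1/T), then renormalize
    scaled = {d: p ** (1.0 / temperature) for d, p in probs.items()}
    z = sum(scaled.values())
    return {d: w / z for d, w in scaled.items()}

# Hypothetical corpus sizes for the three dialects (illustrative only)
sizes = {"U-Tsang": 10_000, "Amdo": 40_000, "Kham": 5_000}
weights = temperature_sampling_weights(sizes, temperature=5.0)
```

With these numbers, the smallest corpus (Kham) receives a sampling probability well above its raw ~9% share, while the largest (Amdo) is sampled below its raw ~73% share, which is the intended "mutual assistance" effect.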
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Speech Translation | Tibetan Dialects (Amdo, Kham, Ü-Tsang) | BLEU (Amdo): 20.59 | 6 |
| Machine Translation | Tibetan Dialects (Amdo, Kham, Ü-Tsang) | -- | 4 |
| Automatic Speech Recognition | Tibetan Dialects (Amdo, Kham, Ü-Tsang) | WER (Amdo): 14.25 | 3 |
| Gender Recognition | Tibetan Speech | Precision: 99.6 | 3 |
| Speaker Emotion Recognition | Speaker Emotion Recognition (SER) (test) | Precision (Anger): 41.67 | 3 |