# Ming-UniAudio: Speech LLM for Joint Understanding, Generation and Editing with Unified Representation
## About
Existing speech models face competing requirements on token representations from understanding and generation tasks. This representational mismatch prevents speech language models from performing instruction-based free-form editing. To address this challenge, we introduce a novel framework that unifies speech understanding, generation, and editing.

At the core of our framework is MingTok-Audio, a unified continuous speech tokenizer and the first continuous tokenizer to effectively integrate semantic and acoustic features, making it suitable for both understanding and generation tasks. Building on this tokenizer, we develop the speech language model Ming-UniAudio, which balances generation and understanding capabilities. Ming-UniAudio sets new state-of-the-art (SOTA) records on 8 out of 12 metrics on the ContextASR benchmark; notably, for Chinese voice cloning, it achieves a highly competitive Seed-TTS-WER of 0.95.

Leveraging this foundation model, we further train a dedicated speech editing model, Ming-UniAudio-Edit, the first speech language model that enables universal, free-form speech editing guided solely by natural-language instructions, handling both semantic and acoustic modifications without timestamp conditioning.

To rigorously assess editing capability and establish a foundation for future research, we introduce Ming-Freeform-Audio-Edit, the first comprehensive benchmark tailored to instruction-based free-form speech editing, featuring diverse scenarios and evaluation dimensions spanning semantic correctness, acoustic quality, and instruction alignment. We open-source the continuous audio tokenizer, the unified foundation model, and the free-form instruction-based editing model to facilitate the development of unified audio understanding, generation, and manipulation.
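As a rough illustration of the intended workflow (encode speech to continuous tokens, edit via a natural-language instruction, decode back to audio), here is a minimal sketch. The Hugging Face-style loading pattern, the model ids, and the `encode`/`edit`/`decode` method names are all assumptions made for illustration only, not the repositories' confirmed API; consult the released code for the actual interface.

```python
# Illustrative sketch only: the model ids and the encode/edit/decode
# method names below are assumptions, NOT the published API.
import soundfile as sf
from transformers import AutoModel

# Load the unified continuous tokenizer and the editing model (hypothetical ids).
tokenizer = AutoModel.from_pretrained("inclusionAI/MingTok-Audio", trust_remote_code=True)
editor = AutoModel.from_pretrained("inclusionAI/Ming-UniAudio-Edit", trust_remote_code=True)

waveform, sr = sf.read("speech.wav")

# 1) Map the waveform to continuous speech tokens (semantic + acoustic).
tokens = tokenizer.encode(waveform, sample_rate=sr)  # assumed method

# 2) Free-form edit driven purely by a natural-language instruction,
#    with no timestamp conditioning.
edited = editor.edit(
    tokens,
    instruction="Replace 'hello' with 'good morning'.",  # assumed method
)

# 3) Decode the edited tokens back to a waveform.
out = tokenizer.decode(edited)  # assumed method
sf.write("edited.wav", out, sr)
```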
## Related benchmarks
| Task | Dataset | Metric | Result (%) | Rank |
|---|---|---|---|---|
| Speech Emotion Recognition | MELD In-Domain v1 (test) | Accuracy | 41.94 | 14 |
| Speech Emotion Recognition | Emo-Emilia Zero-Shot v1 (test) | Accuracy | 21.07 | 13 |
| Speech Emotion Recognition | ESD In-Domain v1 (test) | Accuracy | 29.93 | 13 |
| Speech Emotion Recognition | EMOVO Zero-Shot v1 (test) | Accuracy | 21.45 | 13 |
| Speech Emotion Recognition | EmoDB Zero-Shot v1 (test) | Accuracy | 37.31 | 12 |
| Speech Emotion Recognition | RAVDESS In-Domain v1 (test) | Accuracy | 36.49 | 12 |
| Speech Emotion Recognition | CASE Zero-Shot v1 (test) | Accuracy | 29.24 | 12 |
| Speech Editing | RealEdit | WER | 9.98 | 8 |
| Insertion Speech Editing | Ming-Freeform-Audio-Edit-Benchmark (basic) | WER | 6.63 | 5 |
| Insertion Speech Editing | Ming-Freeform-Audio-Edit-Benchmark (full) | WER | 7.59 | 5 |
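The speech-editing rows above report word error rate (WER, lower is better). For reference, here is a minimal, self-contained example of how a WER score is typically computed using the `jiwer` package; the reference and hypothesis sentences are made up for illustration and are not benchmark data.

```python
# Minimal WER computation with the jiwer package (pip install jiwer).
# The reference/hypothesis strings are made-up examples, not benchmark data.
from jiwer import wer

reference = "please insert the phrase thank you after the greeting"
hypothesis = "please insert the phrase thank u after greeting"

# WER = (substitutions + deletions + insertions) / reference word count.
# Here: 1 substitution ("you" -> "u") + 1 deletion ("the") over 9 words.
error_rate = wer(reference, hypothesis)
print(f"WER = {error_rate:.2%}")  # ~22.22%
```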