
Audio-Omni: Extending Multi-modal Understanding to Versatile Audio Generation and Editing

About

Recent progress in multimodal models has spurred rapid advances in audio understanding, generation, and editing. However, these capabilities are typically addressed by specialized models, leaving the development of a truly unified framework that seamlessly integrates all three tasks underexplored. While some pioneering works have explored unifying audio understanding and generation, they often remain confined to specific domains. To address this, we introduce Audio-Omni, the first end-to-end framework to unify generation and editing across general sound, music, and speech domains, with integrated multimodal understanding capabilities. Our architecture synergizes a frozen Multimodal Large Language Model for high-level reasoning with a trainable Diffusion Transformer for high-fidelity synthesis. To overcome the critical data scarcity in audio editing, we construct AudioEdit, a new large-scale dataset comprising over one million meticulously curated editing pairs. Extensive experiments demonstrate that Audio-Omni achieves state-of-the-art performance across a suite of benchmarks, outperforming prior unified approaches while matching or surpassing specialized expert models. Beyond its core capabilities, Audio-Omni exhibits remarkable inherited capabilities, including knowledge-augmented reasoning generation, in-context generation, and zero-shot cross-lingual control for audio generation, highlighting a promising direction toward universal generative audio intelligence. The code, model, and dataset will be publicly released at https://zeyuet.github.io/Audio-Omni.
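The abstract describes a two-part design: a frozen Multimodal LLM that supplies high-level semantic conditioning, and a trainable Diffusion Transformer (DiT) that performs the actual audio synthesis. The toy sketch below illustrates that split in PyTorch; all module sizes, class names, and the timestep-injection scheme are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AudioOmniSketch(nn.Module):
    """Toy sketch of a frozen-LLM + trainable-DiT split (hypothetical sizes)."""

    def __init__(self, d_model=64, n_layers=2):
        super().__init__()
        # Stand-in for the frozen Multimodal LLM; the real system would load
        # a large pretrained model here and keep it frozen.
        self.mllm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            n_layers)
        for p in self.mllm.parameters():
            p.requires_grad_(False)  # frozen: provides semantics, not trained

        # Trainable Diffusion-Transformer-style denoiser (toy stand-in):
        # cross-attends from noisy audio latents to the LLM's conditioning.
        self.dit = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            n_layers)
        self.t_embed = nn.Linear(1, d_model)  # diffusion timestep embedding

    def forward(self, noisy_latents, prompt_embeds, t):
        # prompt_embeds: (B, L_prompt, D) token embeddings for the instruction
        cond = self.mllm(prompt_embeds)                  # (B, L_prompt, D)
        h = noisy_latents + self.t_embed(t)[:, None, :]  # inject timestep
        return self.dit(h, cond)  # predicted noise over the audio latents
```

Only the DiT (and timestep embedding) receive gradients during training, so the LLM's understanding ability is preserved while the synthesis head is adapted.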

Zeyue Tian, Binxin Yang, Zhaoyang Liu, Jiexuan Zhang, Ruibin Yuan, Hubery Yin, Qifeng Chen, Chen Li, Jing Lv, Wei Xue, Yike Guo • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-Audio Generation | AudioCaps (test) | FAD | 1.86 | 154 |
| Multimodal Understanding | MMMU | MMMU Score | 63.3 | 59 |
| Text-to-Speech | Seed-TTS EN | WER | 1.77 | 20 |
| Video-to-Audio | VGGSound (test) | -- | -- | 20 |
| Video-to-Music Generation | V2M-bench (test) | Fréchet Audio Distance (FAD) | 1.58 | 12 |
| Multimodal Understanding | MMSU | MMSU Score | 56.83 | 7 |
| Audio Editing | AudioEdit (test) | Feature Distance (FD) | 3.27 | 6 |
| Text-to-Music | MusicCaps (test) | FAD | 1.94 | 6 |
| Audio Editing | AudioEdit | Overlap Score (OVL) | 79.8 | 3 |
| Text-to-Music | T2M Evaluation Set | OVL | 82.7 | 3 |

Showing 10 of 13 rows.
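Several rows above report Fréchet Audio Distance (FAD), which fits a Gaussian to embeddings of real and generated audio and computes the Fréchet distance between the two fits: FAD = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). The sketch below shows that final computation, assuming embeddings (e.g. from a VGGish-style model) have already been extracted; the function name and the embedding source are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_audio_distance(emb_real: np.ndarray, emb_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussian fits of two embedding sets.

    emb_real, emb_gen: (num_clips, embed_dim) arrays of audio embeddings.
    Lower is better; identical distributions give 0.
    """
    mu_r, mu_g = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    cov_r = np.cov(emb_real, rowvar=False)
    cov_g = np.cov(emb_gen, rowvar=False)

    covmean = sqrtm(cov_r @ cov_g)  # matrix square root of the product
    if np.iscomplexobj(covmean):
        covmean = covmean.real      # discard tiny imaginary numerical noise

    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```

In practice the reported FAD depends heavily on which embedding model is used, so scores are only comparable within a benchmark that fixes that choice.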

Other info

GitHub
