
MMEDIT: A Unified Framework for Multi-Type Audio Editing via Audio Language Model

About

Text-guided audio editing aims to modify specific acoustic events while strictly preserving non-target content. Despite recent progress, existing approaches remain fundamentally limited. Training-free methods often suffer from signal degradation caused by diffusion inversion, while training-based methods, although achieving higher generation quality, are severely constrained by the scarcity of high-quality paired data and task formulations that cover only a narrow subset of editing operations. In addition, standard architectures typically decouple text and audio processing, limiting the ability to align instructions with specific acoustic contexts. To address these challenges, we propose MMEdit, an audio-language-model-driven framework for unified audio editing. We systematically extend task definitions to cover a comprehensive range of editing operations, including addition, replacement, removal, reordering, and attribute modification. Furthermore, we design a scalable data synthesis pipeline to construct large-scale paired datasets with fine-grained event-level annotations. To capture complex editing semantics, we integrate a Qwen2-Audio encoder with an MMDiT-based generator, enabling precise cross-modal alignment and localized editing. Experimental results demonstrate that our method achieves superior editing localization accuracy, robust instruction following, and high fidelity in non-edited regions.
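The abstract's task taxonomy (addition, replacement, removal, reordering, attribute modification) and event-level paired-data pipeline can be illustrated with a minimal sketch. The schema below is an assumption for illustration only, not the authors' actual data format; field names such as `target_event` and `edited_caption` are hypothetical.

```python
from dataclasses import dataclass

# The five editing operations named in the abstract. The string labels here
# are illustrative assumptions, not the paper's canonical identifiers.
EDIT_TYPES = {"add", "replace", "remove", "reorder", "attribute_modify"}

@dataclass
class EditSample:
    """One hypothetical paired training sample with event-level annotation."""
    instruction: str      # natural-language editing instruction
    edit_type: str        # one of EDIT_TYPES
    target_event: str     # event-level annotation of the edited sound
    source_caption: str   # caption describing the input audio
    edited_caption: str   # caption describing the desired output audio

    def __post_init__(self):
        if self.edit_type not in EDIT_TYPES:
            raise ValueError(f"unknown edit type: {self.edit_type}")

# Example of a paired sample such a synthesis pipeline might emit:
# non-target content ("rain falls") is preserved across the edit.
sample = EditSample(
    instruction="Replace the dog barking with a cat meowing",
    edit_type="replace",
    target_event="dog barking",
    source_caption="a dog barking while rain falls",
    edited_caption="a cat meowing while rain falls",
)
```

The key invariant a pipeline like this would enforce is that the source and edited captions differ only in the target event, which is what lets the model learn localized edits with high fidelity elsewhere.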

Ye Tao, Wen Wu, Chao Zhang, Mengyue Wu, Shuai Wang, Xuenan Xu • 2025

Related benchmarks

Task           Dataset           Metric   Result   Rank
Audio Editing  Synthetic (test)  LSD      1.209    12
Audio Editing  AudioCaps         R-MOS    4.43     12
