
AudioChat: Unified Audio Storytelling, Editing, and Understanding with Transfusion Forcing

About

Despite recent breakthroughs, audio foundation models struggle to process complex multi-source acoustic scenes. We refer to this challenging domain as audio stories, which can contain multiple speakers and background/foreground sound effects. Compared to traditional audio processing tasks, audio stories introduce new layers of semantic, temporal, and physical complexity. To address this challenge, we propose AudioChat, a framework for developing audio foundation models that can generate, edit, and understand audio stories. AudioChat introduces a new paradigm in which LLM-based tool-calling agents simulate interactions between users and the system, and these simulated dialogues are used as training data. We also introduce a novel Audio Transfusion Forcing objective to train the AudioChat model, allowing it to simultaneously decompose high-level instructions via structured chain-of-thought reasoning and perform interactive multi-turn audio understanding/generation. To evaluate generation and editing performance, we develop three new metrics that directly measure task performance instead of relying on distribution-based scoring. We highly encourage readers to visit our demo to better understand the capabilities of AudioChat: https://wanchichen.github.io/audiochat/.
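To make the data-generation paradigm above concrete, here is a minimal sketch of how two agents could roll out simulated user-system dialogues and log them as training examples. All names here (simulate_user_turn, TOOLS, and so on) are illustrative assumptions, not APIs from the AudioChat paper.

```python
import random

# Hypothetical tool set a system agent might expose for audio stories.
TOOLS = ["generate_speech", "add_sound_effect", "edit_segment", "caption_audio"]

def simulate_user_turn(rng):
    """Stand-in for an LLM 'user' agent proposing an instruction."""
    tool = rng.choice(TOOLS)
    text = f"Please {tool.replace('_', ' ')} in my story."
    return {"role": "user", "tool": tool, "text": text}

def simulate_system_turn(user_turn):
    """Stand-in for the tool-calling 'system' agent answering the request."""
    return {"role": "system", "tool_call": user_turn["tool"], "status": "ok"}

def simulate_dialogue(num_turns, seed=0):
    """Roll out a multi-turn dialogue and collect it as one training example."""
    rng = random.Random(seed)
    dialogue = []
    for _ in range(num_turns):
        user = simulate_user_turn(rng)
        dialogue.append(user)
        dialogue.append(simulate_system_turn(user))
    return dialogue

data = simulate_dialogue(3)
```

In a real pipeline both agents would be LLMs and the tool calls would produce actual audio; the point of the sketch is only the rollout-and-log structure of the simulated conversations.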

William Chen, Prem Seetharaman, Rithesh Kumar, Oriol Nieto, Shinji Watanabe, Justin Salamon, Zeyu Jin • 2026

Related benchmarks

Task                  Dataset                Metric     Result  Rank
Audio Storytelling    StoryGen-Eval (test)   KAD        2.52    4
Speaker Diarization   StoryGen Eval          tcpWER     9.7     3
Audio Captioning      StoryGen Eval          multiFLAM  86.3    2
