
MM-Sonate: Multimodal Controllable Audio-Video Generation with Zero-Shot Voice Cloning

About

Joint audio-video generation aims to synthesize synchronized multisensory content, yet current unified models struggle with fine-grained acoustic control, particularly for identity-preserving speech. Existing approaches either suffer from temporal misalignment due to cascaded generation or lack the capability to perform zero-shot voice cloning within a joint synthesis framework. In this work, we present MM-Sonate, a multimodal flow-matching framework that unifies controllable audio-video joint generation with zero-shot voice cloning capabilities. Unlike prior works that rely on coarse semantic descriptions, MM-Sonate utilizes a unified instruction-phoneme input to enforce strict linguistic and temporal alignment. To enable zero-shot voice cloning, we introduce a timbre injection mechanism that effectively decouples speaker identity from linguistic content. Furthermore, addressing the limitations of standard classifier-free guidance in multimodal settings, we propose a noise-based negative conditioning strategy that utilizes natural noise priors to significantly enhance acoustic fidelity. Empirical evaluations demonstrate that MM-Sonate establishes new state-of-the-art performance in joint generation benchmarks, significantly outperforming baselines in lip synchronization and speech intelligibility, while achieving voice cloning fidelity comparable to specialized Text-to-Speech systems.
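The paper's exact formulation of noise-based negative conditioning is not given here, but the idea builds on standard classifier-free guidance: at each sampling step the model output under the full condition is extrapolated away from the output under a negative condition. In MM-Sonate's setting, the negative branch would be conditioned on a natural-noise prior rather than an empty prompt. A minimal sketch of the guidance arithmetic (function name and toy shapes are illustrative, not from the paper):

```python
import numpy as np

def guided_velocity(v_cond, v_neg, scale):
    """Classifier-free guidance with a negative branch:
    extrapolate from the negative prediction toward the conditional one.
    scale=1.0 recovers the conditional output; larger values push
    further away from the negative condition."""
    return v_neg + scale * (v_cond - v_neg)

rng = np.random.default_rng(0)

# Toy arrays standing in for flow-matching velocity predictions
# over an audio latent (batch x latent_dim).
v_cond = rng.standard_normal((2, 8))  # prediction given the full condition
v_neg = rng.standard_normal((2, 8))   # prediction given a noise-prior negative condition

v = guided_velocity(v_cond, v_neg, scale=3.0)
```

The only difference from vanilla classifier-free guidance is what feeds the negative branch; here the abstract suggests a noise prior stands in for the usual unconditional (empty-prompt) pass.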

Chunyu Qiang, Jun Wang, Xiaopeng Wang, Kang Yin, Yuxin Guo • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text-to-Audio-Video Generation | Verse-Bench | MS | 0.48 | 16
Text-to-Speech | Chinese TTS Evaluation ZH (test) | SIM-o | 69.1 | 8
Text-to-Speech | English TTS Evaluation (EN) (test) | SIM-o | 0.604 | 8
Music Generation | SongEval (test) | Coherence | 3 | 4
