
Any-to-Any Generation via Composable Diffusion

About

We present Composable Diffusion (CoDi), a novel generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. Unlike existing generative AI systems, CoDi can generate multiple modalities in parallel and its input is not limited to a subset of modalities like text or image. Despite the absence of training datasets for many combinations of modalities, we propose to align modalities in both the input and output space. This allows CoDi to freely condition on any input combination and generate any group of modalities, even if they are not present in the training data. CoDi employs a novel composable generation strategy which involves building a shared multimodal space by bridging alignment in the diffusion process, enabling the synchronized generation of intertwined modalities, such as temporally aligned video and audio. Highly customizable and flexible, CoDi achieves strong joint-modality generation quality, and outperforms or is on par with the unimodal state-of-the-art for single-modality synthesis. The project page with demonstrations and code is at https://codi-gen.github.io
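To make the idea of conditioning on "any input combination" concrete, here is a minimal, purely illustrative sketch (not the official CoDi code) of the composable-conditioning pattern the abstract describes: each input modality is encoded into a shared, aligned embedding space, and the per-modality embeddings are interpolated into a single condition vector. The encoder functions, embedding dimension, and weights below are hypothetical placeholders.

```python
# Hypothetical sketch of composable conditioning in a shared embedding space.
# Real systems would use learned neural encoders; these stand-ins only
# illustrate the interface: every modality maps into the same space.

def encode_text(prompt):
    # Stand-in for a text encoder producing a shared-space embedding.
    return [float(len(prompt) % 7), 1.0, 0.0]

def encode_image(pixels):
    # Stand-in for an image encoder producing a shared-space embedding.
    return [sum(pixels) / len(pixels), 0.0, 1.0]

def compose_conditions(embeddings, weights=None):
    """Interpolate aligned embeddings into one shared-space condition."""
    n = len(embeddings)
    weights = weights or [1.0 / n] * n
    dim = len(embeddings[0])
    return [sum(w * e[i] for w, e in zip(weights, embeddings))
            for i in range(dim)]

# Because all encoders target the same space, any subset of inputs
# (text only, image only, or both) yields a valid condition vector.
text_emb = encode_text("a dog barking")
image_emb = encode_image([0.2, 0.4, 0.6])
condition = compose_conditions([text_emb, image_emb])
```

The key design point this sketch mirrors is that the diffusion model never needs a dedicated pathway per input combination: aligning encoders to one space lets a single conditioning mechanism cover all combinations, including ones absent from training data.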

Zineng Tang, Ziyi Yang, Chenguang Zhu, Michael Zeng, Mohit Bansal• 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Captioning | MS COCO Karpathy (test) | — | — | 682 |
| Text-to-Image Generation | GenEval | Overall Score | 31 | 506 |
| Audio Classification | ESC-50 | Accuracy | 21.05 | 374 |
| Text-to-Image Generation | GenEval | GenEval Score | 38 | 360 |
| Text-to-Audio Generation | AudioCaps (test) | FAD | 1.8 | 154 |
| Audio Captioning | AudioCaps (test) | CIDEr | 7.9 | 140 |
| Text-to-Image Generation | MS-COCO | FID | 11.26 | 131 |
| Video Captioning | MSRVTT | CIDEr | 74.4 | 107 |
| Text-to-Video Generation | MSR-VTT (test) | CLIP Similarity | 0.289 | 85 |
| Video Captioning | VATEX | CIDEr | 74.4 | 76 |
Showing 10 of 55 rows
