
VideoTetris: Towards Compositional Text-to-Video Generation

About

Diffusion models have demonstrated great success in text-to-video (T2V) generation. However, existing methods struggle with complex (long) video generation scenarios that involve multiple objects or dynamic changes in the number of objects. To address these limitations, we propose VideoTetris, a novel framework that enables compositional T2V generation. Specifically, we propose spatio-temporal compositional diffusion, which precisely follows complex textual semantics by manipulating and composing the attention maps of the denoising network spatially and temporally. Moreover, we introduce an enhanced video data preprocessing pipeline that improves the training data in terms of motion dynamics and prompt understanding, together with a new reference frame attention mechanism that improves the consistency of auto-regressive video generation. Extensive experiments demonstrate that VideoTetris achieves impressive qualitative and quantitative results in compositional T2V generation. Code is available at: https://github.com/YangLing0818/VideoTetris
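The spatial side of the compositional diffusion idea can be illustrated with a toy sketch: each sub-prompt (e.g. one per object) yields its own attention map, and the maps are merged according to the spatial region each sub-prompt should occupy. The function name, array shapes, and merging rule below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def compose_attention_maps(attn_maps, region_masks):
    """Spatially compose per-sub-prompt attention maps.

    attn_maps:    list of (H, W) float arrays, one per sub-prompt.
    region_masks: list of (H, W) binary arrays assigning spatial
                  locations to each sub-prompt's region.
    Returns a single (H, W) composed attention map in which each
    region is governed by its own sub-prompt's attention.
    """
    composed = np.zeros_like(attn_maps[0], dtype=float)
    coverage = np.zeros_like(attn_maps[0], dtype=float)
    for attn, mask in zip(attn_maps, region_masks):
        composed += attn * mask   # keep each map only inside its region
        coverage += mask          # count overlapping regions per pixel
    # Average where regions overlap; leave uncovered pixels at zero.
    return np.divide(composed, coverage,
                     out=np.zeros_like(composed), where=coverage > 0)

# Toy usage: two sub-prompts, left half vs. right half of the frame.
left_attn = np.full((2, 4), 1.0)
right_attn = np.full((2, 4), 2.0)
left_mask = np.zeros((2, 4)); left_mask[:, :2] = 1
right_mask = np.zeros((2, 4)); right_mask[:, 2:] = 1
merged = compose_attention_maps([left_attn, right_attn],
                                [left_mask, right_mask])
```

In this sketch, `merged` takes the value of the left sub-prompt's attention on the left half and the right sub-prompt's on the right half; the temporal dimension of the actual method (composing across frames as object counts change) is not modeled here.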

Ye Tian, Ling Yang, Haotian Yang, Yuan Gao, Yufan Deng, Jingmin Chen, Xintao Wang, Zhaochen Yu, Xin Tao, Pengfei Wan, Di Zhang, Bin Cui • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Video Generation | T2V-CompBench | Consistency Attribute Score | 0.7125 | 22 |
| Text-to-Video Generation | Compositional Prompts | VBLIP-VQA Score | 0.5563 | 7 |
| Layout-guided video generation | YouTubeVIS 2021 (test val) | FVD | 590 | 5 |
| Long Video Generation | Progressive Compositional Prompts | VBLIP-VQA | 0.4839 | 3 |
