
BlobGEN-Vid: Compositional Text-to-Video Generation with Blob Video Representations

About

Existing video generation models struggle to follow complex text prompts and synthesize multiple objects, raising the need for additional grounding input to improve controllability. In this work, we propose to decompose videos into visual primitives, yielding a blob video representation: a general representation for controllable video generation. Based on blob conditions, we develop a blob-grounded video diffusion model named BlobGEN-Vid that allows users to control object motions and fine-grained object appearance. In particular, we introduce a masked 3D attention module that effectively improves regional consistency across frames. In addition, we introduce a learnable module to interpolate text embeddings so that users can control semantics in specific frames and obtain smooth object transitions. We show that our framework is model-agnostic and build BlobGEN-Vid on both U-Net-based and DiT-based video diffusion models. Extensive experimental results show that BlobGEN-Vid achieves superior zero-shot video generation ability and state-of-the-art layout controllability on multiple benchmarks. When combined with an LLM for layout planning, our framework even outperforms proprietary text-to-video generators in terms of compositional accuracy.
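The masked 3D attention idea above can be sketched in a few lines: spatio-temporal tokens are restricted so that a blob region only attends to itself across frames, which ties its appearance together temporally. The exact masking rule below (same blob id, with background treated as its own region) is an assumption for illustration; the abstract does not specify the module's details.

```python
import torch
import torch.nn.functional as F

def masked_3d_attention(q, k, v, blob_ids):
    """Minimal sketch of blob-masked spatio-temporal attention.

    q, k, v:  (B, N, C) flattened video tokens, where N = T * H * W
    blob_ids: (B, N) integer blob id per token (0 = background)

    Each token attends only to tokens carrying the same blob id across
    all frames, encouraging regional consistency over time. How real
    BlobGEN-Vid handles background/foreground interaction is not stated
    in the abstract; here background is simply its own region.
    """
    # Boolean mask: True where attention is allowed (matching blob ids).
    # The diagonal is always True, so every query has at least one key.
    allow = blob_ids.unsqueeze(2) == blob_ids.unsqueeze(1)  # (B, N, N)
    return F.scaled_dot_product_attention(q, k, v, attn_mask=allow)
```

Because the mask is block-diagonal over blob ids, perturbing tokens of one blob leaves every other blob's output unchanged, which is the consistency property the module is after.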

Weixi Feng, Chao Liu, Sifei Liu, William Yang Wang, Arash Vahdat, Weili Nie • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-Video Generation | T2V-CompBench | Consistency Attribute Score | 0.74 | 22 |
| Text-to-Video Generation | TC-Bench | Attribute Transition TCR | 15.39 | 8 |
| Layout-guided video generation | YouTubeVIS 2021 (test val) | FVD | 317 | 5 |
| Multi-view indoor scene generation | ScanNet++ (test) | FID | 27.94 | 4 |
