
BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration

About

Diffusion Transformers have shown remarkable ability in generating high-fidelity videos, delivering visually coherent frames and rich detail over extended durations. However, existing video generation models still fall short on subject-consistent video generation due to an inherent difficulty in parsing prompts that specify complex spatial relationships, temporal logic, and interactions among multiple subjects. To address this issue, we propose BindWeave, a unified framework that handles a broad range of subject-to-video scenarios, from single-subject cases to complex multi-subject scenes with heterogeneous entities. To bind complex prompt semantics to concrete visual subjects, we introduce an MLLM-DiT framework in which a pretrained multimodal large language model performs deep cross-modal reasoning to ground entities and disentangle roles, attributes, and interactions, yielding subject-aware hidden states that condition the diffusion transformer for high-fidelity subject-consistent video generation. Experiments on the OpenS2V benchmark demonstrate that our method achieves superior performance on subject consistency, naturalness, and text relevance in generated videos, outperforming existing open-source and commercial models.
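The core conditioning idea, MLLM hidden states steering the diffusion transformer, can be sketched as a cross-attention update in which video tokens query the subject-aware states. This is a minimal numpy illustration, not BindWeave's actual implementation; the function name, shapes, and random projection matrices are all hypothetical stand-ins for learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(video_tokens, mllm_states, d_head=32, seed=0):
    """Condition DiT video tokens on subject-aware MLLM hidden states
    via a single cross-attention head (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    d_v = video_tokens.shape[-1]
    d_m = mllm_states.shape[-1]
    # hypothetical projections; in a real model these are learned
    Wq = rng.standard_normal((d_v, d_head)) / np.sqrt(d_v)
    Wk = rng.standard_normal((d_m, d_head)) / np.sqrt(d_m)
    Wv = rng.standard_normal((d_m, d_v)) / np.sqrt(d_m)
    Q = video_tokens @ Wq   # (T, d_head): queries from video tokens
    K = mllm_states @ Wk    # (S, d_head): keys from MLLM states
    V = mllm_states @ Wv    # (S, d_v):   values from MLLM states
    attn = softmax(Q @ K.T / np.sqrt(d_head), axis=-1)  # (T, S)
    return video_tokens + attn @ V  # residual update, shape preserved

# toy shapes: 8 video tokens of dim 64, 4 subject-aware states of dim 128
out = cross_attend(np.zeros((8, 64)), np.ones((4, 128)))
```

Because the update is residual and value projections map back to the video-token dimension, the conditioned tokens keep their original shape and can flow through the rest of the DiT unchanged.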

Zhaoyang Li, Dongjun Qian, Kai Su, Qishuai Diao, Xiangyang Xia, Chang Liu, Wenfei Yang, Tianzhu Zhang, Zehuan Yuan • 2025

Related benchmarks

Task                                  Dataset                         Result                    Rank
subject-to-video generation           OpenS2V-Eval zero-shot (test)   Total Score 57.61         16
Subject-to-video                      OpenS2V Eval                    Total Score 57.61         11
Reference-to-Video Generation         OpenS2V-Eval 2025a              Total Score 57.61         9
Subject-consistent Video Generation   User Study                      Subject Consistency 3.94  7
