BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration
About
Diffusion Transformers have shown remarkable ability to generate high-fidelity videos, delivering visually coherent frames and rich detail over extended durations. However, existing video generation models still fall short on subject-consistent video generation because they struggle to parse prompts that specify complex spatial relationships, temporal logic, and interactions among multiple subjects. To address this issue, we propose BindWeave, a unified framework that handles a broad range of subject-to-video scenarios, from single-subject cases to complex multi-subject scenes with heterogeneous entities. To bind complex prompt semantics to concrete visual subjects, we introduce an MLLM-DiT framework in which a pretrained multimodal large language model performs deep cross-modal reasoning to ground entities and disentangle roles, attributes, and interactions, yielding subject-aware hidden states that condition the diffusion transformer for high-fidelity subject-consistent video generation. Experiments on the OpenS2V benchmark demonstrate that our method achieves superior performance on subject consistency, naturalness, and text relevance in generated videos, outperforming existing open-source and commercial models.
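The core mechanism described above, projecting subject-aware hidden states from a pretrained MLLM and letting the diffusion transformer attend to them, can be illustrated with a minimal sketch. All module names, dimensions, and wiring below are illustrative assumptions for a generic cross-attention conditioning setup, not the actual BindWeave implementation.

```python
# Minimal sketch of MLLM-hidden-state conditioning for a DiT block.
# Hypothetical names and dimensions; not the released BindWeave code.
import torch
import torch.nn as nn


class SubjectAwareConditioner(nn.Module):
    """Projects MLLM hidden states into the DiT conditioning space (assumed dims)."""

    def __init__(self, mllm_dim: int = 4096, cond_dim: int = 1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.LayerNorm(mllm_dim),
            nn.Linear(mllm_dim, cond_dim),
        )

    def forward(self, mllm_hidden: torch.Tensor) -> torch.Tensor:
        # mllm_hidden: (batch, seq_len, mllm_dim) last-layer states from a
        # pretrained multimodal LLM that has read the prompt and subject images.
        return self.proj(mllm_hidden)  # (batch, seq_len, cond_dim)


class DiTBlockWithSubjectCrossAttn(nn.Module):
    """One DiT block in which video tokens cross-attend to subject-aware states."""

    def __init__(self, dim: int = 1024, heads: int = 16):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_video_tokens, dim) noisy spatio-temporal latent tokens
        # cond: (batch, n_cond_tokens, dim) projected MLLM hidden states
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        # Cross-modal binding: every video token attends to the subject-aware states.
        x = x + self.cross_attn(h, cond, cond, need_weights=False)[0]
        return x + self.mlp(self.norm3(x))


if __name__ == "__main__":
    batch, n_cond, n_video = 2, 77, 256
    conditioner = SubjectAwareConditioner()
    block = DiTBlockWithSubjectCrossAttn()
    mllm_hidden = torch.randn(batch, n_cond, 4096)   # stand-in for real MLLM output
    video_tokens = torch.randn(batch, n_video, 1024)
    out = block(video_tokens, conditioner(mllm_hidden))
    print(out.shape)  # torch.Size([2, 256, 1024])
```

The key design point this sketch captures is that the conditioning signal is not a pooled embedding but the full sequence of MLLM hidden states, so the cross-attention can route different subjects' roles and attributes to different regions of the video latent.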
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Subject-to-Video Generation | OpenS2V-Eval zero-shot (test) | Total Score | 57.61 | 16 |
| Subject-to-Video Generation | OpenS2V-Eval | Total Score | 57.61 | 11 |
| Reference-to-Video Generation | OpenS2V-Eval 2025a | Total Score | 57.61 | 9 |
| Subject-Consistent Video Generation | User Study | Subject Consistency | 3.94 | 7 |