PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation
About
Text-to-video (T2V) generation has recently been enabled by transformer-based diffusion models, but current T2V models struggle to adhere to real-world common knowledge and physical rules, owing to their limited understanding of physical realism and their deficient temporal modeling. Existing solutions are either data-driven or require extra model inputs, and they do not generalize to out-of-distribution domains. In this paper, we present PhyT2V, a new data-independent T2V technique that extends a current T2V model's video generation capability to out-of-distribution domains by enabling chain-of-thought and step-back reasoning in T2V prompting. Our experiments show that PhyT2V improves existing T2V models' adherence to real-world physical rules by 2.3x, and achieves a 35% improvement over T2V prompt enhancers. The source code is available at: https://github.com/pittisl/PhyT2V.
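The iterative self-refinement described above can be sketched roughly as the loop below: generate a video, have an LLM reason (chain-of-thought and step-back) about which physical rules the scene should obey and where the result falls short, then rewrite the prompt and repeat. All names here (`phyt2v_refine`, `t2v_model.generate`, `captioner.describe`, `llm.ask`) are hypothetical placeholders for illustration, not the repository's actual API; see the source code for the real implementation.

```python
# A minimal sketch of an LLM-guided iterative prompt-refinement loop,
# assuming hypothetical t2v_model, captioner, and llm objects.
def phyt2v_refine(user_prompt, t2v_model, captioner, llm, rounds=3):
    """Iteratively refine a T2V prompt using LLM reasoning about physics."""
    prompt = user_prompt
    for _ in range(rounds):
        # Step 1: generate a candidate video from the current prompt.
        video = t2v_model.generate(prompt)

        # Step 2: describe the generated video, then use step-back reasoning
        # to list the real-world physical rules the scene should obey.
        caption = captioner.describe(video)
        rules = llm.ask(
            f"What physical rules should govern this scene?\nPrompt: {prompt}"
        )

        # Step 3: chain-of-thought reasoning over the mismatch between the
        # caption and the prompt, then rewrite the prompt to close the gap.
        prompt = llm.ask(
            "Compare the video caption with the target prompt, identify "
            "violations of the physical rules, and rewrite the prompt to "
            f"fix them.\nCaption: {caption}\nRules: {rules}\n"
            f"Current prompt: {prompt}"
        )
    # Final generation with the refined prompt.
    return t2v_model.generate(prompt), prompt
```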
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Video Generation | VideoPhy | PC Score | 0.42 | 20 |
| Text-to-Video Generation | PhyGenBench 1.0 (test) | PC | 0.42 | 16 |
| Text-to-Video Generation | VideoPhy2 (test) | Hard Score | 0.0389 | 8 |
| Text-to-Video Generation | PhyGenBench (short, unextended prompts) | Mechanics Score | 45 | 8 |
| Text-to-Video Generation | VideoPhy2 and PhyGenBench | Preference Score (%) | 88.5 | 7 |
| Video Generation | PhyGenBench | -- | -- | 4 |