PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation
About
Text-to-video (T2V) generation has recently been enabled by transformer-based diffusion models, but current T2V models lack the capability to adhere to real-world common knowledge and physical rules, due to their limited understanding of physical realism and deficient temporal modeling. Existing solutions are either data-driven or require extra model inputs, and cannot generalize to out-of-distribution domains. In this paper, we present PhyT2V, a new data-independent T2V technique that extends the current T2V model's video generation capability to out-of-distribution domains by enabling chain-of-thought and step-back reasoning in T2V prompting. Our experiments show that PhyT2V improves existing T2V models' adherence to real-world physical rules by 2.3x, and achieves a 35% improvement over T2V prompt enhancers. The source code is available at: https://github.com/pittisl/PhyT2V.
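The self-refinement loop described above can be sketched schematically. This is a minimal illustration, not the released implementation: `t2v_generate`, `caption_video`, `llm_analyze`, and `llm_rewrite` are hypothetical stand-ins for the T2V model, the video captioner, and the two LLM reasoning steps (chain-of-thought mismatch analysis and step-back prompt rewriting); see the GitHub repository for the actual pipeline.

```python
# Hedged sketch of an LLM-guided iterative prompt-refinement loop.
# All four helpers below are deterministic stubs standing in for real models.

_pending_violations = ["the ball should decelerate due to ground friction"]

def t2v_generate(prompt):
    """Stub for the T2V diffusion model: returns a placeholder 'video'."""
    return f"<video for: {prompt}>"

def caption_video(video):
    """Stub for a video captioner: echoes the placeholder back."""
    return video

def llm_analyze(prompt, caption):
    """Stub for the chain-of-thought step: report one physical-rule
    mismatch between the prompt's intent and the captioned video,
    or an empty string once none remain."""
    return _pending_violations.pop(0) if _pending_violations else ""

def llm_rewrite(prompt, mismatch):
    """Stub for the step-back step: fold the identified physical
    constraint back into the prompt."""
    return f"{prompt} (physics: {mismatch})"

def refine_prompt(prompt, rounds=3):
    """Iteratively refine a T2V prompt.

    Each round: (1) generate a video from the current prompt,
    (2) caption it, (3) have the LLM find physical-rule violations,
    and (4) rewrite the prompt to address them; stop early when no
    violations are found.
    """
    for _ in range(rounds):
        video = t2v_generate(prompt)
        caption = caption_video(video)
        mismatch = llm_analyze(prompt, caption)
        if not mismatch:
            break
        prompt = llm_rewrite(prompt, mismatch)
    return prompt

refined = refine_prompt("a ball rolls across a grassy field")
print(refined)
```

With the stubs above, one refinement round fires and the prompt gains the friction constraint; with real models, each round would instead feed an actual generated video and caption into the LLM.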
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Video Generation | VideoPhy | SA (%) | 61 | 50 |
| Text-to-Video Generation | VideoPhy | PC Score | 37 | 41 |
| Text-to-Video Generation | PhyGenBench 1.0 (test) | PC | 0.42 | 16 |
| Physical Plausibility Evaluation | VideoPhy | Average PC | 37 | 16 |
| Video Generation | PhyGenBench | PCA Score | 0.42 | 13 |
| Text-to-Video Generation | PhyGenBench | Mec Score | 20 | 12 |
| Video Generation | VideoPhy Fluid-Fluid | SA and PC Score | 55.4 | 11 |
| Prompt Enhancement for Text-to-Video Generation | CogVideoX-5B (test) | SA | 50.6 | 11 |
| Video Generation | VideoPhy Overall | SA and PC Score | 40.1 | 11 |
| Video Generation | VideoPhy Solid-Solid | SA and PC Score | 25.4 | 11 |