
PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation

About

Text-to-video (T2V) generation has recently been enabled by transformer-based diffusion models, but current T2V models struggle to adhere to real-world common knowledge and physical rules, due to their limited understanding of physical realism and deficient temporal modeling. Existing solutions are either data-driven or require extra model inputs, and do not generalize to out-of-distribution domains. In this paper, we present PhyT2V, a new data-independent T2V technique that expands a current T2V model's video generation capability to out-of-distribution domains by enabling chain-of-thought and step-back reasoning in T2V prompting. Our experiments show that PhyT2V improves existing T2V models' adherence to real-world physical rules by 2.3x, and achieves a 35% improvement over T2V prompt enhancers. The source code is available at: https://github.com/pittisl/PhyT2V.
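The abstract describes an iterative self-refinement loop around a frozen T2V model: generate a video from the current prompt, have an LLM step back to the physical rules the scene should obey and reason through mismatches between the prompt and what was generated, then revise the prompt and repeat. A minimal sketch of such a loop is below; the function names and the injected callables (`generate_video`, `caption`, `reason_rules`, `revise_prompt`, `score`) are hypothetical placeholders, not the authors' actual implementation or API.

```python
def phyt2v_refine(prompt, generate_video, caption, reason_rules,
                  revise_prompt, score, rounds=3):
    """Sketch of an LLM-guided iterative prompt-refinement loop
    (assumed structure, not the authors' code).

    Each round:
      1. generate a video from the current prompt,
      2. caption the generated video,
      3. ask an LLM to extract the physical rules the scene should obey
         (step-back reasoning) and spot mismatches between the caption
         and the prompt (chain-of-thought),
      4. revise the prompt with that feedback.
    The best-scoring prompt seen so far is kept.
    """
    best_prompt, best_score = prompt, score(prompt)
    for _ in range(rounds):
        video = generate_video(prompt)          # frozen T2V model
        observed = caption(video)               # video captioner
        feedback = reason_rules(prompt, observed)  # LLM reasoning step
        prompt = revise_prompt(prompt, feedback)   # LLM prompt rewrite
        s = score(prompt)
        if s > best_score:
            best_prompt, best_score = prompt, s
    return best_prompt
```

With stub callables the loop simply accumulates feedback into the prompt each round; in practice the generation, captioning, and reasoning steps would be real model calls, and `score` a physical-plausibility metric.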

Qiyao Xue, Xiangyu Yin, Boyuan Yang, Wei Gao • 2024

Related benchmarks

Task | Dataset | Result | Rank
Video Generation | VideoPhy | SA (%): 61 | 50
Text-to-Video Generation | VideoPhy | PC Score: 37 | 41
Text-to-Video Generation | PhyGenBench 1.0 (test) | PC: 0.42 | 16
Physical Plausibility Evaluation | VideoPhy | Average PC: 37 | 16
Video Generation | PhyGenBench | PCA Score: 0.42 | 13
Text-to-Video Generation | PhyGenBench | Mec Score: 20 | 12
Video Generation | VideoPhy Fluid-Fluid | SA and PC Score: 55.4 | 11
Prompt Enhancement for Text-to-Video Generation | CogVideoX-5B (test) | SA: 50.6 | 11
Video Generation | VideoPhy Overall | SA and PC Score: 40.1 | 11
Video Generation | VideoPhy Solid-Solid | SA and PC Score: 25.4 | 11

(Showing 10 of 29 rows)

Other info

Code: https://github.com/pittisl/PhyT2V
