TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models
About
Text-conditioned image-to-video generation (TI2V) aims to synthesize a realistic video starting from a given image (e.g., a woman's photo) and a text description (e.g., "a woman is drinking water"). Existing TI2V frameworks often require costly training on video-text datasets and specific model designs for text and image conditioning. In this paper, we propose TI2V-Zero, a zero-shot, tuning-free method that empowers a pretrained text-to-video (T2V) diffusion model to be conditioned on a provided image, enabling TI2V generation without any optimization, fine-tuning, or external modules. Our approach leverages a pretrained T2V diffusion foundation model as the generative prior. To guide video generation with the additional image input, we propose a "repeat-and-slide" strategy that modulates the reverse denoising process, allowing the frozen diffusion model to synthesize a video frame by frame starting from the provided image. To ensure temporal continuity, we employ a DDPM inversion strategy to initialize the Gaussian noise for each newly synthesized frame and a resampling technique to help preserve visual details. We conduct comprehensive experiments on both domain-specific and open-domain datasets, where TI2V-Zero consistently outperforms a recent open-domain TI2V model. Furthermore, we show that TI2V-Zero can seamlessly extend to other tasks, such as video infilling and prediction, when provided with more images. Its autoregressive design also supports long video generation.
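The repeat-and-slide loop can be illustrated with a toy sketch. Below is a minimal, hypothetical NumPy simulation of the control flow only: the conditioning window starts as the input image repeated, each new frame is synthesized from that window, and the window then slides forward by one frame. The `denoise` stub stands in for the frozen T2V model's reverse diffusion process (which in the actual method operates in latent space with DDPM-inverted noise and resampling); its name and signature are illustrative assumptions, not the paper's API.

```python
import numpy as np

def repeat_and_slide(image, num_frames, window=8, denoise=None, rng=None):
    """Toy sketch of the repeat-and-slide control flow (not the real model).

    image      : conditioning frame, a NumPy array
    num_frames : total frames in the output video, including `image`
    window     : length of the sliding conditioning window
    denoise    : stand-in for the frozen T2V reverse-diffusion step
    """
    rng = np.random.default_rng(0) if rng is None else rng
    if denoise is None:
        # Placeholder dynamics: nudge the last window frame with noise.
        denoise = lambda win, noise: win[-1] + 0.1 * noise

    frames = [image]
    win = [image] * window  # "repeat": fill the window with the input image
    for _ in range(num_frames - 1):
        # In TI2V-Zero the per-frame noise comes from DDPM inversion;
        # here it is plain Gaussian noise for illustration.
        noise = rng.standard_normal(image.shape)
        new_frame = denoise(np.stack(win), noise)
        frames.append(new_frame)
        win = win[1:] + [new_frame]  # "slide": drop oldest, append newest
    return np.stack(frames)
```

Because each frame depends only on the current window, the same loop runs autoregressively for arbitrarily many frames, which is what enables the long-video, infilling, and prediction extensions mentioned above.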
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Animation | Synthetic Dataset (test) | CLIP-T | 24.87 | 3 |
| Image Animation | UCF-101 | User Preference Score | 28.4 | 3 |
| Video Generation | CK+ (test) | FVD | 81.72 | 3 |
| Outpainting | UCF-101 | User Preference Score | 38.6 | 2 |
| Rewinding | UCF-101 | User Preference | 39.8 | 2 |