
Pix2Video: Video Editing using Image Diffusion

About

Image diffusion models, trained on massive image collections, have emerged as the most versatile image generators in terms of quality and diversity. They support inverting real images and conditional (e.g., text-guided) generation, making them attractive for high-quality image editing applications. We investigate how to use such pre-trained image models for text-guided video editing. The critical challenge is to achieve the target edits while still preserving the content of the source video. Our method works in two simple steps: first, we use a pre-trained structure-guided (e.g., depth-conditioned) image diffusion model to perform text-guided edits on an anchor frame; then, in the key step, we progressively propagate the changes to future frames via self-attention feature injection, adapting the core denoising step of the diffusion model. We then consolidate the changes by adjusting the latent code for the frame before continuing the process. Our approach is training-free and generalizes to a wide range of edits. We demonstrate the effectiveness of the approach through extensive experimentation and compare it against four different prior and parallel efforts (on arXiv). We show that realistic text-guided video edits are possible without any compute-intensive preprocessing or video-specific finetuning.
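The propagation step can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, tensor shapes, and the choice to concatenate anchor and previous-frame features are illustrative assumptions. The core idea shown is self-attention feature injection, where the current frame's queries attend to keys and values taken from the anchor (and previous) frame, so the edited appearance carries over to later frames.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def injected_self_attention(q_cur, k_anchor, v_anchor, k_prev, v_prev):
    """Illustrative self-attention feature injection.

    Instead of attending only to its own keys/values, the current frame's
    queries attend to features injected from the anchor and previous frames,
    which propagates the anchor frame's edit across the video.
    All shapes are (tokens, dim); names are hypothetical.
    """
    k = np.concatenate([k_anchor, k_prev], axis=0)  # injected keys
    v = np.concatenate([v_anchor, v_prev], axis=0)  # injected values
    scores = q_cur @ k.T / np.sqrt(q_cur.shape[-1])  # scaled dot-product
    return softmax(scores, axis=-1) @ v              # (tokens, dim)

# Toy usage with random features standing in for U-Net activations.
rng = np.random.default_rng(0)
toks, dim = 4, 8
q = rng.standard_normal((toks, dim))
out = injected_self_attention(
    q,
    rng.standard_normal((toks, dim)), rng.standard_normal((toks, dim)),
    rng.standard_normal((toks, dim)), rng.standard_normal((toks, dim)),
)
```

In practice this substitution happens inside the denoising U-Net's self-attention layers at each diffusion step; the sketch only captures the attention arithmetic, not the latent-code adjustment that consolidates the changes afterwards.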

Duygu Ceylan, Chun-Hao Paul Huang, Niloy J. Mitra • 2023

Related benchmarks

Task | Dataset | Result | Rank
Video Editing | Video Editing Evaluation Set (test) | CLIP Score: 0.32 | 7
Zero-shot Text-guided Video Editing | Curated dataset, 8 frames | CLIP-F: 89.96 | 6
4D-conditioned Animation Generation | Proposed 4D-conditioned animation generation evaluation set | Frame Consistency: 0.963 | 5
Text-guided Video Editing | 11 videos (test) | Frame Accuracy: 100 | 4
