
Structure and Content-Guided Video Synthesis with Diffusion Models

About

Text-guided generative diffusion models unlock powerful image creation and editing tools. While these have been extended to video generation, current approaches that edit the content of existing footage while retaining its structure either require expensive re-training for every input or rely on error-prone propagation of image edits across frames. In this work, we present a structure and content-guided video diffusion model that edits videos based on visual or textual descriptions of the desired output. Conflicts between user-provided content edits and structure representations occur due to insufficient disentanglement between the two aspects. As a solution, we show that training on monocular depth estimates with varying levels of detail provides control over structure and content fidelity. Our model is trained jointly on images and videos, which also exposes explicit control of temporal consistency through a novel guidance method. Our experiments demonstrate a wide variety of successes: fine-grained control over output characteristics, customization based on a few reference images, and a strong user preference for results produced by our model.
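The "varying levels of detail" idea in the abstract can be sketched as follows: during training, the monocular depth estimate used as the structure signal is degraded by a randomly sampled amount, so the model learns the whole spectrum from strict structure preservation to loose layout guidance. This is a minimal illustration, not the paper's implementation; the function names, the use of a Gaussian blur as the degradation, and the `max_sigma` range are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_depth(depth: np.ndarray, sigma: float) -> np.ndarray:
    """Blur a monocular depth map to remove fine structural detail.

    sigma = 0 preserves the estimated structure exactly; larger sigma
    keeps only the coarse scene layout, leaving more freedom for the
    content (text or image) conditioning to reshape the output.
    """
    if sigma <= 0:
        return depth.copy()
    return gaussian_filter(depth, sigma=sigma)

def sample_structure_conditioning(depth: np.ndarray,
                                  rng: np.random.Generator,
                                  max_sigma: float = 8.0):
    """Sample a structure signal at a random level of detail.

    Randomizing the degradation during training is what lets a single
    model expose structure fidelity as a user-facing control at
    inference time, where sigma is chosen directly instead of sampled.
    """
    sigma = float(rng.uniform(0.0, max_sigma))
    return degrade_depth(depth, sigma), sigma
```

At inference, the user would then pick the blur level directly, trading fidelity to the source video's structure against freedom for the edit.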

Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, Anastasis Germanidis • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Text-to-Video Generation | VBench | Quality Score: 82.47 | 111 |
| Text-to-Video Generation | VBench 2024 (test) | Total Score: 80.58 | 15 |
| Motion Customization | TGVE 76 videos (full) | Text Alignment: 28.54 | 12 |
| 4D-conditioned Animation Generation | Proposed 4D-conditioned animation generation evaluation set | Frame Consistency: 0.9907 | 5 |
| Text-guided Video Editing | 24 videos (full) | Text Alignment (CLIP): 0.78 | 5 |
| Text-driven Perpetual Scene Generation | RealEstate10K indoor videos vs GEN-1 comparison scale filtered subset (110 videos) | Rotation Error (deg): 2.47 | 2 |
