
World-consistent Video Diffusion with Explicit 3D Modeling

About

Recent advancements in diffusion models have set new benchmarks in image and video generation, enabling realistic visual synthesis across single- and multi-frame contexts. However, these models still struggle with efficiently and explicitly generating 3D-consistent content. To address this, we propose World-consistent Video Diffusion (WVD), a novel framework that incorporates explicit 3D supervision using XYZ images, which encode global 3D coordinates for each image pixel. More specifically, we train a diffusion transformer to learn the joint distribution of RGB and XYZ frames. This approach supports multi-task adaptability via a flexible inpainting strategy. For example, WVD can estimate XYZ frames from ground-truth RGB or generate novel RGB frames using XYZ projections along a specified camera trajectory. In doing so, WVD unifies tasks like single-image-to-3D generation, multi-view stereo, and camera-controlled video generation. Our approach demonstrates competitive performance across multiple benchmarks, providing a scalable solution for 3D-consistent video and image generation with a single pretrained model.
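To make the flexible inpainting strategy concrete, the sketch below shows how a single pretrained RGB+XYZ diffusion model could cover several tasks purely by changing a conditioning mask. This is a minimal, hypothetical sketch, not the authors' released code: it assumes a diffusers-style DDPM scheduler, a placeholder denoiser, and illustrative names such as `sample_with_inpainting` and the frame-level mask layout.

```python
# Illustrative sketch of inpainting-style conditioning for a joint RGB+XYZ
# diffusion model (names and interfaces are assumptions, not the WVD codebase).
import torch
from diffusers import DDPMScheduler


def sample_with_inpainting(model, scheduler, observed, mask, num_steps=50):
    """Generate unobserved frames while keeping observed frames fixed.

    observed: (B, F, C, H, W) tensor with ground-truth values where mask == 1
              (F stacks RGB and XYZ frames; XYZ pixels store global 3D coordinates).
    mask:     (B, F, 1, 1, 1) binary tensor; 1 = conditioned (kept), 0 = generated.
    """
    scheduler.set_timesteps(num_steps)
    x = torch.randn_like(observed)  # start the whole sequence from noise
    for t in scheduler.timesteps:
        # Re-inject observed frames at the current noise level so the model
        # always denoises a sequence consistent with the conditioning.
        noisy_obs = scheduler.add_noise(observed, torch.randn_like(observed), t)
        x = mask * noisy_obs + (1 - mask) * x

        eps = model(x, t)  # predict noise jointly for RGB and XYZ frames
        x = scheduler.step(eps, t, x).prev_sample

    # Paste the clean observations back at the end.
    return mask * observed + (1 - mask) * x


# Toy usage with a dummy denoiser, just to show the mask-defined tasks:
#   * RGB -> XYZ estimation: mask = 1 on RGB frames, 0 on XYZ frames.
#   * Camera-controlled generation: mask = 1 on XYZ frames projected along a
#     chosen trajectory, 0 on the RGB frames to be synthesized.
#   * Single-image-to-3D: mask = 1 on a single RGB frame only.
model = lambda x, t: torch.zeros_like(x)          # placeholder denoiser
scheduler = DDPMScheduler(num_train_timesteps=1000)
observed = torch.zeros(1, 8, 3, 32, 32)           # 4 RGB + 4 XYZ frames (toy shapes)
mask = torch.zeros(1, 8, 1, 1, 1)
mask[:, :4] = 1.0                                  # condition on the RGB frames
result = sample_with_inpainting(model, scheduler, observed, mask)
```

The point of the sketch is that no task-specific heads or retraining are needed in this setup; which modality is estimated and which is given is decided entirely by the mask passed at sampling time.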

Qihang Zhang, Shuangfei Zhai, Miguel Angel Bautista, Kevin Miao, Alexander Toshev, Joshua Susskind, Jiatao Gu • 2024

Related benchmarks

Task | Dataset | Result | Rank
Monocular Depth Estimation | NYU V2 | -- | 113
Monocular Depth Estimation | BONN | Relative Error (Rel): 7 | 14
Video Depth Estimation | ScanNet++ | Absolute Relative Error: 5 | 10
Single-image to 3D | RealEstate10K, ScanNet, MVImgNet, CO3D, and Habitat (test) | FID: 15.8 | 4

Other info

Code
