
Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis

About

Accurate reconstruction of complex dynamic scenes from just a single viewpoint continues to be a challenging task in computer vision. Current dynamic novel view synthesis methods typically require videos from many different camera viewpoints, necessitating careful recording setups and significantly restricting their utility both in the wild and in embodied AI applications. In this paper, we propose GCD, a controllable monocular dynamic view synthesis pipeline that leverages large-scale diffusion priors to, given a video of any scene, generate a synchronous video from any other chosen perspective, conditioned on a set of relative camera pose parameters. Our model does not require depth as input and does not explicitly model 3D scene geometry, instead performing end-to-end video-to-video translation in order to achieve its goal efficiently. Despite being trained only on synthetic multi-view video data, zero-shot real-world generalization experiments show promising results in multiple domains, including robotics, object permanence, and driving environments. We believe our framework can potentially unlock powerful applications in rich dynamic scene understanding, perception for robotics, and interactive 3D video viewing experiences for virtual reality.
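The abstract mentions conditioning generation on relative camera pose parameters between the input and target viewpoints. A minimal sketch of how such a relative transform can be derived from two 4x4 world-to-camera extrinsic matrices is shown below; the function name and conventions are illustrative assumptions, not taken from the GCD codebase:

```python
import numpy as np

def relative_pose(src_extrinsics: np.ndarray, tgt_extrinsics: np.ndarray) -> np.ndarray:
    """Return the 4x4 rigid transform mapping the source camera frame to the
    target camera frame, given world-to-camera extrinsics for each camera."""
    # Composing target extrinsics with the inverse of the source extrinsics
    # cancels the shared world frame, leaving only the relative motion.
    return tgt_extrinsics @ np.linalg.inv(src_extrinsics)
```

For example, if the source camera sits at the world origin (identity extrinsics), the relative pose equals the target extrinsics directly.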

Basile Van Hoorick, Rundi Wu, Ege Ozguroglu, Kyle Sargent, Ruoshi Liu, Pavel Tokmakov, Achal Dave, Changxi Zheng, Carl Vondrick • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Text-to-Video Generation | VBench | - | 111 |
| Dynamic View Synthesis | Kubric-4D (evaluation) | PSNR (all) 20.3 | 7 |
| Narrow Dynamic View Synthesis | DyCheck iPhone 1.0 (test) | PSNR 11.43 | 7 |
| Narrow Dynamic View Synthesis | Kubric-4D gradual 1.0 (test) | PSNR 20.42 | 7 |
| Camera control | UltraVideo (test) | DINO 0.0691 | 7 |
| Narrow Dynamic View Synthesis | ParDom-4D gradual 1.0 (test) | PSNR 24.75 | 6 |
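Most of the benchmark rows above report PSNR (peak signal-to-noise ratio), a standard pixel-level reconstruction metric. A minimal sketch of the metric, assuming images normalized to [0, 1] (the benchmarks' exact evaluation code may differ in masking and averaging details):

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two arrays of equal shape."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better: a uniform per-pixel error of 0.1 yields 20 dB, roughly the range reported for the Kubric-4D rows.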
