
Stable Virtual Camera: Generative View Synthesis with Diffusion Models

About

We present Stable Virtual Camera (Seva), a generalist diffusion model that creates novel views of a scene given any number of input views and target cameras. Existing works struggle to generate either large viewpoint changes or temporally smooth samples, and rely on task-specific configurations. Our approach overcomes these limitations through a simple model design, an optimized training recipe, and a flexible sampling strategy that together generalize across view synthesis tasks at test time. As a result, our samples maintain high consistency without requiring additional 3D representation-based distillation, streamlining view synthesis in the wild. Furthermore, we show that our method can generate high-quality videos lasting up to half a minute with seamless loop closure. Extensive benchmarking demonstrates that Seva outperforms existing methods across different datasets and settings. Project page with code and model: https://stable-virtual-camera.github.io/.
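To make the task interface in the abstract concrete, here is a minimal sketch of what a generalist view-synthesis call looks like: any number N of posed input views plus M target camera poses go in, and M novel views come out. The `SevaSampler` class, its method names, and the tensor shapes below are illustrative assumptions, not the actual implementation (which is available from the project page); a stub stands in for the diffusion sampler.

```python
import numpy as np

class SevaSampler:
    """Hypothetical sketch of a generalist view-synthesis interface:
    N input views + M target cameras -> M generated views."""

    def __init__(self, height=576, width=576):
        self.height, self.width = height, width

    def sample(self, input_views, input_cams, target_cams):
        # input_views: (N, H, W, 3) images in [0, 1]
        # input_cams:  (N, 4, 4) camera-to-world poses of the input views
        # target_cams: (M, 4, 4) poses to render novel views from
        n = input_views.shape[0]
        m = target_cams.shape[0]
        assert input_cams.shape == (n, 4, 4)
        # A real diffusion model would iteratively denoise M frames
        # conditioned on the N input views; this stub returns blank frames
        # of the right shape to illustrate the contract.
        return np.zeros((m, self.height, self.width, 3))

# Usage: 2 input views and 5 target cameras yield 5 novel views.
sampler = SevaSampler()
views = np.random.rand(2, 576, 576, 3)
in_cams = np.tile(np.eye(4), (2, 1, 1))
tgt_cams = np.tile(np.eye(4), (5, 1, 1))
out = sampler.sample(views, in_cams, tgt_cams)
```

The key point of the interface is that N and M are both free: the same model covers single-image, few-view, and many-view settings without task-specific configuration.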

Jensen Zhou, Hang Gao, Vikram Voleti, Aaryaman Vasishta, Chun-Han Yao, Mark Boss, Philip Torr, Christian Rupprecht, Varun Jampani • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Novel View Synthesis | Tanks&Temples (test) | – | – | 239 |
| Novel View Synthesis | LLFF (test) | PSNR | 15.6 | 79 |
| 3D Scene Generation | WorldScore | Camera Control | 0.558 | 33 |
| Novel View Synthesis | RealEstate-10K 2-view | PSNR | 25.66 | 28 |
| Novel View Synthesis | ScanNet++ | PSNR | 11.71 | 24 |
| Scene-level View Synthesis | RealEstate10k (val) | PSNR | 25.66 | 15 |
| View Synthesis | Tanks&Temples | PSNR | 11.76 | 15 |
| Novel View Synthesis | ScanNet++ (test) | LPIPS | 0.596 | 15 |
| Novel View Synthesis | RealEstate-10K 3-view | PSNR | 27.57 | 14 |
| Single-view Novel View Synthesis | RealEstate10K Short-term, 50th frame 84 (test) | PSNR | 18.67 | 13 |
Showing 10 of 46 rows
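Most rows above report PSNR (peak signal-to-noise ratio, in decibels; higher is better), which compares a generated view against the ground-truth image. For reference, a standard PSNR computation for images in [0, 1] looks like this (the `psnr` helper below is a generic reference implementation, not code from the project):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB between images in [0, max_val]."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform per-pixel error of 0.1 gives MSE = 0.01, i.e. 20 dB.
target = np.zeros((4, 4, 3))
pred = np.full((4, 4, 3), 0.1)
print(round(psnr(pred, target), 2))  # 20.0
```

LPIPS, used in the ScanNet++ (test) row, is instead a learned perceptual distance where lower is better, so the two metrics are not directly comparable.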
