
StreetCrafter: Street View Synthesis with Controllable Video Diffusion Models

About

This paper tackles the problem of photorealistic view synthesis from vehicle sensor data. Recent advances in neural scene representation have achieved notable success in rendering high-quality autonomous driving scenes, but performance degrades significantly as the viewpoint deviates from the training trajectory. To mitigate this problem, we introduce StreetCrafter, a novel controllable video diffusion model that uses LiDAR point cloud renderings as pixel-level conditions, fully exploiting the generative prior for novel view synthesis while preserving precise camera control. Moreover, the pixel-level LiDAR conditions allow accurate pixel-level edits to target scenes. In addition, the generative prior of StreetCrafter can be effectively incorporated into dynamic scene representations to achieve real-time rendering. Experiments on the Waymo Open Dataset and PandaSet demonstrate that our model enables flexible control over viewpoint changes, enlarging the region of satisfactory view synthesis and outperforming existing methods.
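The core of the pixel-level conditioning described above is rendering the LiDAR point cloud into the target camera view, producing a sparse color image that is spatially aligned with the frame the diffusion model should generate. The paper does not publish this routine in the abstract, so the following is a minimal sketch, assuming a standard pinhole camera model, a world-to-camera transform, and per-point colors; the function name and all parameters are illustrative, not the authors' API.

```python
import numpy as np

def render_lidar_condition(points, colors, K, T_wc, hw):
    """Project world-space LiDAR points into the target camera to build a
    pixel-level condition image (nearest point wins via a z-buffer).

    points: (N, 3) world coordinates; colors: (N, 3) RGB in [0, 1];
    K: (3, 3) intrinsics; T_wc: (4, 4) world-to-camera; hw: (H, W).
    """
    H, W = hw
    # Transform points into the camera frame.
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = (T_wc @ pts_h.T).T[:, :3]
    front = cam[:, 2] > 1e-6                      # keep points in front of the camera
    cam, cols = cam[front], colors[front]
    # Pinhole perspective projection.
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z, cols = u[inside], v[inside], cam[inside, 2], cols[inside]
    # Z-buffer by splatting farthest-to-nearest, so the closest point
    # overwrites any occluded one landing on the same pixel.
    order = np.argsort(-z)
    cond = np.zeros((H, W, 3), dtype=np.float32)
    cond[v[order], u[order]] = cols[order]
    return cond
```

In a conditioning pipeline of this kind, the resulting image (and/or a validity mask of hit pixels) would be concatenated channel-wise with the diffusion model's input at each frame, which is what gives the model precise, pixel-aligned camera control.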

Yunzhi Yan, Zhen Xu, Haotong Lin, Haian Jin, Haoyu Guo, Yida Wang, Kun Zhan, Xianpeng Lang, Hujun Bao, Xiaowei Zhou, Sida Peng • 2024

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
| --- | --- | --- | --- | --- |
| Novel View Synthesis | Waymo | KID | 0.157 | 7 |
| View Extrapolation (Lane Shift) | PandaSet (test) | FID @ 2m | 62.15 | 6 |
| View Interpolation | PandaSet (test) | PSNR | 26.68 | 6 |
| Street View Synthesis | Waymo Lane Shift | FID @ 2m | 58.17 | 5 |
| Street View Synthesis | Waymo (Interpolation) | PSNR | 30.05 | 5 |

Other info

Code
