Diff4Splat: Controllable 4D Scene Generation with Latent Dynamic Reconstruction Models
About
We introduce Diff4Splat, a feed-forward method that synthesizes controllable and explicit 4D scenes from a single image. Our approach unifies the generative priors of video diffusion models with geometry and motion constraints learned from large-scale 4D datasets. Given a single input image, a camera trajectory, and an optional text prompt, Diff4Splat directly predicts a deformable 3D Gaussian field that encodes appearance, geometry, and motion, all in a single forward pass, without test-time optimization or post-hoc refinement. At the core of our framework lies a video latent transformer, which augments video diffusion models to jointly capture spatio-temporal dependencies and predict time-varying 3D Gaussian primitives. Training is guided by objectives on appearance fidelity, geometric accuracy, and motion consistency, enabling Diff4Splat to synthesize high-quality 4D scenes in 30 seconds. We demonstrate the effectiveness of Diff4Splat across video generation, novel view synthesis, and geometry extraction, where it matches or surpasses optimization-based methods for dynamic scene synthesis while being significantly more efficient.
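The abstract describes the output as a deformable 3D Gaussian field: canonical Gaussian primitives (position, scale, rotation, opacity, color) plus time-varying motion. The paper excerpt does not specify the exact parameterization, so the following is a minimal illustrative sketch, assuming per-frame translation offsets on the canonical means; the class and field names are hypothetical, not Diff4Splat's API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DeformableGaussianField:
    """Hypothetical container for N time-varying 3D Gaussian primitives."""
    means: np.ndarray      # (N, 3) canonical centers
    scales: np.ndarray     # (N, 3) per-axis extents
    rotations: np.ndarray  # (N, 4) unit quaternions
    opacities: np.ndarray  # (N,)
    colors: np.ndarray     # (N, 3) RGB
    deltas: np.ndarray     # (T, N, 3) per-frame offsets added to the means

    def at_time(self, t: int) -> np.ndarray:
        """Gaussian centers deformed to frame t (assumed offset model)."""
        return self.means + self.deltas[t]

# Toy field: 4 Gaussians drifting 0.01 units per frame over 3 frames.
N, T = 4, 3
field = DeformableGaussianField(
    means=np.zeros((N, 3)),
    scales=np.full((N, 3), 0.1),
    rotations=np.tile([1.0, 0.0, 0.0, 0.0], (N, 1)),
    opacities=np.ones(N),
    colors=np.full((N, 3), 0.5),
    deltas=np.arange(T)[:, None, None] * np.ones((T, N, 3)) * 0.01,
)
print(field.at_time(2)[0])  # centers shifted by 0.02 along each axis
```

A feed-forward model in this setting would emit all of these tensors in one pass; rendering a frame then amounts to splatting the Gaussians at `at_time(t)` under the requested camera pose.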
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Camera-controlled Video Generation | Camera-Controlled Video Generation Benchmark | Average Matches Score: 5.11e+3 | 6 |
| Camera-controlled Video Generation | Custom Evaluation Set | FVD: 210.2 | 6 |
| Camera-controlled Video Generation | Camera Pose Fidelity Evaluation Set | Avg RPE (Translation): 0.012 | 2 |