Stepper: Stepwise Immersive Scene Generation with Multiview Panoramas
About
The synthesis of immersive 3D scenes from text is rapidly maturing, driven by novel video generative models and feed-forward 3D reconstruction, with vast potential in AR/VR and world modeling. While panoramic images have proven effective for scene initialization, existing approaches suffer from a trade-off between visual fidelity and explorability: autoregressive expansion suffers from context drift, while panoramic video generation is limited to low resolution. We present Stepper, a unified framework for text-driven immersive 3D scene synthesis that circumvents these limitations via stepwise panoramic scene expansion. Stepper leverages a novel multi-view 360° diffusion model that enables consistent, high-resolution expansion, coupled with a geometry reconstruction pipeline that enforces geometric coherence. Trained on a new large-scale, multi-view panorama dataset, Stepper achieves state-of-the-art fidelity and structural consistency, outperforming prior approaches and setting a new standard for immersive scene generation.
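The stepwise expansion described above can be sketched as a simple loop: seed the scene with one panorama, then repeatedly synthesize a new panorama at the next camera position conditioned on all previously generated views rather than only the most recent one. The sketch below is illustrative only; `Panorama`, `generate_initial_panorama`, and `expand_panorama` are hypothetical stand-ins for the paper's diffusion and reconstruction components, not its actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Panorama:
    """Stand-in for a 360° view rendered at a camera position (hypothetical)."""
    position: tuple
    source_views: list = field(default_factory=list)

def generate_initial_panorama(prompt: str) -> Panorama:
    # Hypothetical: a text-to-panorama model seeds the scene at the origin.
    return Panorama(position=(0.0, 0.0, 0.0))

def expand_panorama(context: list, new_position: tuple) -> Panorama:
    # Hypothetical: multi-view diffusion conditioned on *all* prior panoramas,
    # so each step sees the full scene context instead of only the last frame
    # (the failure mode behind autoregressive context drift).
    return Panorama(position=new_position,
                    source_views=[p.position for p in context])

def stepwise_scene_expansion(prompt: str, waypoints: list) -> list:
    """Grow the scene one panorama at a time along a camera path."""
    scene = [generate_initial_panorama(prompt)]
    for pos in waypoints:
        scene.append(expand_panorama(scene, pos))
    return scene

scene = stepwise_scene_expansion("a cozy cabin interior",
                                 [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
```

In this toy version the final panorama records that it was conditioned on both earlier viewpoints, mirroring the multi-view conditioning the abstract credits for consistency; a real pipeline would also run geometry reconstruction after each step.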
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Novel View Synthesis | Blender | PSNR 21.995 | 64 |
| Immersive Scene Generation | User Study (n=10 participants, 15 video comparisons) | Visual Appeal Score 88 | 4 |
| Novel View Synthesis | Infinigen Indoors | PSNR 21.775 | 4 |
| Novel View Synthesis | Infinigen Outdoors | PSNR 20.507 | 4 |
| Novel View Synthesis | Infinigen & Blender Average | PSNR 21.426 | 4 |