
4DNeX: Feed-Forward 4D Generative Modeling Made Easy

About

We present 4DNeX, the first feed-forward framework for generating 4D (i.e., dynamic 3D) scene representations from a single image. In contrast to existing methods that rely on computationally intensive optimization or require multi-frame video inputs, 4DNeX enables efficient, end-to-end image-to-4D generation by fine-tuning a pretrained video diffusion model. Specifically: (1) to alleviate the scarcity of 4D data, we construct 4DNeX-10M, a large-scale dataset with high-quality 4D annotations generated using advanced reconstruction approaches; (2) we introduce a unified 6D video representation that jointly models RGB and XYZ sequences, facilitating structured learning of both appearance and geometry; (3) we propose a set of simple yet effective adaptation strategies to repurpose pretrained video diffusion models for 4D modeling. 4DNeX produces high-quality dynamic point clouds that enable novel-view video synthesis. Extensive experiments demonstrate that 4DNeX outperforms existing 4D generation methods in efficiency and generalizability, offering a scalable solution for image-to-4D modeling and laying the foundation for generative 4D world models that simulate dynamic scene evolution.
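The unified 6D representation described above pairs each RGB frame with a per-pixel XYZ coordinate map, so appearance and geometry share one tensor. A minimal sketch of this idea is shown below; the function name, shapes, and use of NumPy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def make_6d_video(rgb, xyz):
    """Stack RGB frames (T, H, W, 3) with per-pixel XYZ maps (T, H, W, 3)
    into a single 6-channel video tensor (T, H, W, 6).
    Hypothetical helper for illustration only."""
    assert rgb.shape == xyz.shape and rgb.shape[-1] == 3
    return np.concatenate([rgb, xyz], axis=-1)

# Toy example: a 4-frame, 8x8 "video" with random appearance and geometry.
T, H, W = 4, 8, 8
rgb = np.random.rand(T, H, W, 3).astype(np.float32)  # appearance channels
xyz = np.random.rand(T, H, W, 3).astype(np.float32)  # geometry (point maps)
video_6d = make_6d_video(rgb, xyz)
print(video_6d.shape)  # (4, 8, 8, 6)
```

In this framing, a video diffusion model fine-tuned on such 6-channel sequences can denoise appearance and geometry jointly, which is what lets the generated XYZ channels be read back as a dynamic point cloud.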

Zhaoxi Chen, Tianqi Liu, Long Zhuo, Jiawei Ren, Zeng Tao, He Zhu, Fangzhou Hong, Liang Pan, Ziwei Liu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Camera Trajectory Estimation | SpatialVid | Trajectory Length Error | 0.034 | 5 |
| Camera Pose Estimation | SpatialVid | ATE | 0.006 | 5 |
| Depth Estimation | SpatialVid | Log RMSE | 0.479 | 5 |
| Image-to-Video Generation | SpatialVid General motion (val) | DD Score | 0.03 | 5 |
| Image-to-Video Generation | SpatialVid Complex motion (val) | DD Score | 0.19 | 5 |
