
DLWM: Dual Latent World Models enable Holistic Gaussian-centric Pre-training in Autonomous Driving

About

Vision-based autonomous driving has attracted much attention due to its low cost and strong performance. Compared with dense BEV (Bird's Eye View) or sparse query models, the Gaussian-centric approach offers a comprehensive yet sparse representation, describing the scene with 3D semantic Gaussians. In this paper, we introduce DLWM, a novel paradigm with Dual Latent World Models specifically designed to enable holistic Gaussian-centric pre-training in autonomous driving in two stages. In the first stage, DLWM predicts 3D Gaussians from queries through self-supervised reconstruction of multi-view semantic and depth images. Equipped with these fine-grained contextual features, the second stage trains two latent world models separately for temporal feature learning: Gaussian-flow-guided latent prediction for downstream occupancy perception and forecasting tasks, and ego-planning-guided latent prediction for motion planning. Extensive experiments on the SurroundOcc and nuScenes benchmarks demonstrate that DLWM yields significant performance gains across Gaussian-centric 3D occupancy perception, 4D occupancy forecasting, and motion planning tasks.
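The two-stage dataflow described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all shapes, the per-Gaussian parameterization (mean, scale, semantic logits), the ego-action dimensionality, and the use of simple tanh projections in place of learned networks are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, w):
    # Stand-in for a learned network: a linear map with tanh nonlinearity.
    return np.tanh(x @ w)

# Stage 1 (assumed shapes): scene queries -> 3D semantic Gaussians.
# Each Gaussian here is 10-dim: mean (3) + scale (3) + semantic logits (4).
N_QUERIES, D_QUERY, D_GAUSS = 16, 32, 10
W_gauss = rng.normal(size=(D_QUERY, D_GAUSS)) * 0.1
queries = rng.normal(size=(N_QUERIES, D_QUERY))
gaussians = project(queries, W_gauss)  # (16, 10); in the paper these are
# supervised by reconstructing multi-view semantic and depth images.

# Stage 2: two latent world models trained separately over Gaussian features.
D_LATENT = 8
W_enc = rng.normal(size=(D_GAUSS, D_LATENT)) * 0.1
latents = project(gaussians, W_enc)  # shared contextual features

# (a) Gaussian-flow-guided latent prediction, for occupancy
#     perception and forecasting.
W_flow = rng.normal(size=(D_LATENT, D_LATENT)) * 0.1
next_latents_occ = project(latents, W_flow)

# (b) Ego-planning-guided latent prediction, for motion planning:
#     condition each latent on a (hypothetical) ego action vector.
D_ACT = 4
W_plan = rng.normal(size=(D_LATENT + D_ACT, D_LATENT)) * 0.1
ego_action = rng.normal(size=(D_ACT,))
cond = np.concatenate([latents, np.tile(ego_action, (N_QUERIES, 1))], axis=1)
next_latents_plan = project(cond, W_plan)

print(gaussians.shape, next_latents_occ.shape, next_latents_plan.shape)
```

The key design point the sketch mirrors is that the two world models share the stage-1 Gaussian features but are conditioned differently: one on scene dynamics (Gaussian flow), one on the ego plan.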

Yiyao Zhu, Ying Xue, Haiming Zhang, Guangfeng Jiang, Wending Zhou, Xu Yan, Jiantao Gao, Yingjie Cai, Bingbing Liu, Zhen Li, Shaojie Shen• 2026

Related benchmarks

Task | Dataset | Result | Rank
3D Semantic Occupancy Prediction | SurroundOcc-nuScenes (val) | mIoU 21.85 | 59
4D Occupancy Forecasting | SurroundOcc-nuScenes (val) | mIoU (1s) 19.66 | 8
