OccLLaMA: An Occupancy-Language-Action Generative World Model for Autonomous Driving

About

The rise of multi-modal large language models (MLLMs) has spurred their application in autonomous driving. Recent MLLM-based methods perform actions by learning a direct mapping from perception to action, neglecting the dynamics of the world and the relations between actions and world dynamics. In contrast, human beings possess a world model that enables them to simulate future states based on an internal 3D visual representation and plan actions accordingly. To this end, we propose OccLLaMA, an occupancy-language-action generative world model, which uses semantic occupancy as a general visual representation and unifies the vision-language-action (VLA) modalities through an autoregressive model. Specifically, we introduce a novel VQVAE-like scene tokenizer that efficiently discretizes and reconstructs semantic occupancy scenes while accounting for their sparsity and class imbalance. We then build a unified multi-modal vocabulary for vision, language, and action. Furthermore, we enhance an LLM, specifically LLaMA, to perform next-token/scene prediction over this unified vocabulary to complete multiple autonomous driving tasks. Extensive experiments demonstrate that OccLLaMA achieves competitive performance across multiple tasks, including 4D occupancy forecasting, motion planning, and visual question answering, showcasing its potential as a foundation model for autonomous driving.
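As a rough sketch of the idea (not the authors' implementation), the following PyTorch snippet shows how a VQVAE-like tokenizer could discretize a semantic occupancy grid into scene tokens: a 3D convolutional encoder downsamples the voxel grid, each latent vector is snapped to its nearest codebook entry, and a decoder reconstructs per-voxel class logits. The architecture, layer sizes, codebook size, and class count are illustrative assumptions.

```python
# Minimal sketch of a VQVAE-like scene tokenizer for semantic occupancy.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn

class OccupancySceneTokenizer(nn.Module):
    def __init__(self, num_classes=17, latent_dim=64, codebook_size=512):
        super().__init__()
        # Encoder: embed per-voxel class IDs, then downsample the grid 4x.
        self.embed = nn.Embedding(num_classes, latent_dim)
        self.encoder = nn.Sequential(
            nn.Conv3d(latent_dim, latent_dim, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(latent_dim, latent_dim, 4, stride=2, padding=1),
        )
        # Discrete codebook: each latent is replaced by its nearest entry.
        self.codebook = nn.Embedding(codebook_size, latent_dim)
        # Decoder: upsample back and predict per-voxel semantic logits.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(latent_dim, latent_dim, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose3d(latent_dim, num_classes, 4, stride=2, padding=1),
        )

    def quantize(self, z):
        # z: (B, C, X, Y, Z) -> flatten spatial dims, find nearest code by L2.
        b, c, *spatial = z.shape
        flat = z.permute(0, 2, 3, 4, 1).reshape(-1, c)      # (N, C)
        dists = torch.cdist(flat, self.codebook.weight)      # (N, K)
        ids = dists.argmin(dim=1)                            # discrete scene tokens
        zq = self.codebook(ids).view(b, *spatial, c).permute(0, 4, 1, 2, 3)
        # Straight-through estimator so gradients reach the encoder.
        zq = z + (zq - z).detach()
        return zq, ids.view(b, -1)

    def forward(self, occ):
        # occ: (B, X, Y, Z) integer semantic class per voxel.
        z = self.encoder(self.embed(occ).permute(0, 4, 1, 2, 3))
        zq, tokens = self.quantize(z)
        logits = self.decoder(zq)  # (B, num_classes, X, Y, Z)
        return logits, tokens

# Example: tokenize a toy 16x16x8 occupancy grid with 17 semantic classes.
tok = OccupancySceneTokenizer()
logits, tokens = tok(torch.randint(0, 17, (1, 16, 16, 8)))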
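The unified multi-modal vocabulary can then be understood as simple token-ID bookkeeping: scene and action tokens are assumed here to be appended after the text vocabulary, so one autoregressive model predicts all three modalities with ordinary next-token prediction. The offsets and sizes below are hypothetical, not the paper's actual configuration.

```python
# Hypothetical unified vocabulary layout (sizes are assumptions).
TEXT_VOCAB = 32000   # e.g., LLaMA text tokens occupy IDs [0, 32000)
SCENE_CODES = 512    # codebook entries from the scene tokenizer
ACTION_BINS = 256    # discretized planning actions (assumed binning)

def scene_token_id(code: int) -> int:
    return TEXT_VOCAB + code                   # scene IDs: [32000, 32512)

def action_token_id(bin_idx: int) -> int:
    return TEXT_VOCAB + SCENE_CODES + bin_idx  # action IDs: [32512, 32768)
```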

Julong Wei, Shanshuai Yuan, Pengfei Li, Qingda Hu, Zhongxue Gan, Wenchao Ding • 2024

Related benchmarks

Task                       Dataset                       Result                           Rank
4D occupancy forecasting   Occ3D-nuScenes                Semantic mIoU (1s): 25.05        25
3D Question Answering      NuscenesQA v1.0 (test)        Existence Accuracy (All): 79.9   19
4D occupancy forecasting   nuScenes                      mIoU (1s Horizon): 25.05         10
Occupancy Forecasting      Occ3D-nuScenes v1.0 (test)    mIoU (1s): 10.34                 7
