
WorldVLM: Combining World Model Forecasting and Vision-Language Reasoning

About

Autonomous driving systems depend on models that can reason about high-level scene context and accurately predict the dynamics of their surrounding environment. Vision-Language Models (VLMs) have recently emerged as promising tools for decision-making and scene understanding, offering strong capabilities in contextual reasoning. However, their limited spatial comprehension constrains their effectiveness as end-to-end driving models. World Models (WMs) internalize environmental dynamics to predict future scene evolution. Recently explored as ego-motion predictors and as foundation models for autonomous driving, they represent a promising direction for addressing key challenges in the field, particularly enhancing generalization while maintaining dynamic prediction. To leverage the complementary strengths of context-based decision-making and prediction, we propose WorldVLM, a hybrid architecture that unifies VLMs and WMs. In our design, the high-level VLM generates behavior commands that guide the driving WM, enabling interpretable and context-aware actions. We evaluate conditioning strategies and provide insights into the challenges of the hybrid design.
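The coupling described in the abstract can be sketched in a few lines: a high-level module emits a discrete behavior command, and a forecasting module rolls out a trajectory conditioned on that command. This is a minimal toy sketch of that interface only; the class names, command vocabulary, and rule-based logic are illustrative assumptions, not the paper's actual models.

```python
# Toy sketch of a VLM -> behavior command -> world model pipeline.
# ToyVLM and ToyWorldModel are hypothetical stand-ins, not the paper's models.

COMMANDS = ["keep_lane", "turn_left", "turn_right", "stop"]

class ToyVLM:
    """Stand-in for the high-level VLM: maps a scene description
    to a discrete behavior command."""
    def plan(self, scene_text: str) -> str:
        if "red light" in scene_text:
            return "stop"
        if "left turn" in scene_text:
            return "turn_left"
        return "keep_lane"

class ToyWorldModel:
    """Stand-in for the driving world model: rolls out a short ego
    trajectory conditioned on the behavior command."""
    def forecast(self, pos, command: str, steps: int = 3):
        x, y = pos
        traj = []
        for _ in range(steps):
            if command == "stop":
                pass                  # hold position
            elif command == "turn_left":
                x += 1.0; y += 0.5    # advance while drifting left
            elif command == "turn_right":
                x += 1.0; y -= 0.5
            else:                     # keep_lane
                x += 1.0
            traj.append((x, y))
        return traj

vlm, wm = ToyVLM(), ToyWorldModel()
cmd = vlm.plan("clear road ahead")
print(cmd, wm.forecast((0.0, 0.0), cmd))
```

The point of the sketch is the interface: the command acts as a conditioning signal for the rollout, which keeps the high-level decision interpretable while the world model handles the dynamics.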

Stefan Englmeier, Katharina Winter, Fabian B. Flohr• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Vision-Language Reasoning | nuScenes (reasoning) | BERT F1 Score | 67 | 4 |
| Trajectory Prediction | nuScenes original (val) | L2 Error (1s) | 0.31 | 3 |
