TrackTeller: Temporal Multimodal 3D Grounding for Behavior-Dependent Object References

About

Understanding natural-language references to objects in dynamic 3D driving scenes is essential for interactive autonomous systems. In practice, many referring expressions describe targets through recent motion or short-term interactions, which cannot be resolved from static appearance or geometry alone. We study temporal language-based 3D grounding, where the objective is to identify the referred object in the current frame by leveraging multi-frame observations. We propose TrackTeller, a temporal multimodal grounding framework that integrates LiDAR-image fusion, language-conditioned decoding, and temporal reasoning in a unified architecture. TrackTeller constructs a shared UniScene representation aligned with textual semantics, generates language-aware 3D proposals, and refines grounding decisions using motion history and short-term dynamics. Experiments on the NuPrompt benchmark demonstrate that TrackTeller consistently improves language-grounded tracking performance, outperforming strong baselines with a 70% relative improvement in Average Multi-Object Tracking Accuracy (AMOTA) and a 3.15-3.4x reduction in False Alarm Frequency (FAF).
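The refinement step described above, using motion history and short-term dynamics to resolve a reference, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the per-frame score dictionaries, and the fixed averaging window are all illustrative assumptions. The idea is simply that a behavior-dependent reference (e.g. "the car that just merged") is better resolved by aggregating language-alignment scores over recent frames than by reading a single frame:

```python
def ground_with_history(frame_scores, window=3):
    """Pick the track best matching the prompt over recent frames.

    frame_scores: time-ordered list of dicts mapping track_id to a
        per-frame language-alignment score (hypothetical upstream output
        of a language-conditioned decoder).
    window: number of most recent frames to aggregate (assumed fixed here;
        a real system would learn or adapt this).
    Returns the track_id with the highest windowed average score.
    """
    recent = frame_scores[-window:]
    totals, counts = {}, {}
    for frame in recent:
        for tid, score in frame.items():
            totals[tid] = totals.get(tid, 0.0) + score
            counts[tid] = counts.get(tid, 0) + 1
    averages = {tid: totals[tid] / counts[tid] for tid in totals}
    return max(averages, key=averages.get)
```

For example, a track whose alignment score rises over the last few frames (matching a motion-based description) wins even if another track scored higher in an earlier, stale frame:

```python
scores = [
    {1: 0.9, 2: 0.1},   # stale frame: track 1 looked right
    {1: 0.4, 2: 0.8},
    {1: 0.3, 2: 0.9},
    {1: 0.2, 2: 0.95},  # recent frames: track 2 matches the behavior
]
print(ground_with_history(scores, window=3))  # -> 2
```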

Jiahong Yu, Ziqi Wang, Hailiang Zhao, Wei Zhai, Xueqiang Yan, Shuiguang Deng • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Prompt-guided Multi-Object Tracking | NuPrompt | AMOTA (tau=0): 594 | 9 |
| 3D Multi-Object Tracking | nuScenes v1.0 (val) | Parameters (M): 1.63e+3 | 9 |
