
End-to-End 3D Spatiotemporal Perception with Multimodal Fusion and V2X Collaboration

About

Multi-view cooperative perception and multimodal fusion are essential for reliable 3D spatiotemporal understanding in autonomous driving, especially under occlusions, limited viewpoints, and communication delays in V2X scenarios. This paper proposes XET-V2X, a multimodal end-to-end tracking framework for V2X collaboration that unifies multi-view multimodal sensing within a shared spatiotemporal representation. To efficiently align heterogeneous viewpoints and modalities, XET-V2X introduces a dual-layer spatial cross-attention module based on multi-scale deformable attention: multi-view image features are first aggregated to enhance semantic consistency, and point-cloud features are then fused under the guidance of the updated spatial queries, enabling effective cross-modal interaction while reducing computational overhead. Experiments on the real-world V2X-Seq-SPD dataset and the simulated V2X-Sim-V2V and V2X-Sim-V2I benchmarks demonstrate consistent improvements in detection and tracking performance under varying communication delays. Both quantitative results and qualitative visualizations indicate that XET-V2X achieves robust and temporally stable perception in complex traffic scenarios.
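The two-stage fusion order described above (image aggregation first, then query-guided point-cloud fusion) can be sketched as follows. This is a minimal illustration using plain scaled dot-product cross-attention in place of the paper's multi-scale deformable attention; all function and variable names (`dual_layer_fusion`, `spatial_queries`, etc.) are hypothetical, not from the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, feats):
    # Standard scaled dot-product cross-attention:
    # each query attends over all feature vectors.
    d = queries.shape[-1]
    scores = queries @ feats.T / np.sqrt(d)   # (num_queries, num_feats)
    weights = softmax(scores, axis=-1)
    return weights @ feats                    # (num_queries, d)

def dual_layer_fusion(spatial_queries, image_feats, point_feats):
    # Layer 1: aggregate multi-view image features into the spatial
    # queries (semantic consistency across views).
    q = spatial_queries + cross_attention(spatial_queries, image_feats)
    # Layer 2: fuse point-cloud features, guided by the *updated*
    # queries from layer 1 (cross-modal interaction).
    q = q + cross_attention(q, point_feats)
    return q

rng = np.random.default_rng(0)
queries = rng.standard_normal((4, 8))   # 4 spatial queries, feature dim 8
img = rng.standard_normal((16, 8))      # flattened multi-view image features
pts = rng.standard_normal((32, 8))      # point-cloud features
out = dual_layer_fusion(queries, img, pts)
print(out.shape)
```

The sketch preserves the key design choice: the point-cloud attention sees queries already enriched by image semantics, rather than attending to both modalities jointly, which is what keeps the cross-modal interaction cheap.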

Zhenwei Yang, Yibo Ai, Weidong Zhang • 2025

Related benchmarks

Task                            Dataset      Result    Rank
Cooperative 3D Object Tracking  V2X-Seq-SPD  mAP 79.5  12
Cooperative 3D Object Tracking  V2X-Sim-V2V  mAP 76.6  12
Cooperative 3D Object Tracking  V2X-Sim-V2I  mAP 85.8  12
