
HM-ViT: Hetero-modal Vehicle-to-Vehicle Cooperative perception with vision transformer

About

Vehicle-to-Vehicle technologies have enabled autonomous vehicles to share information and see through occlusions, greatly enhancing perception performance. Nevertheless, existing works focus on homogeneous traffic in which all vehicles are equipped with the same type of sensors, which significantly limits the scale of collaboration and the benefits of cross-modality interactions. In this paper, we investigate the multi-agent hetero-modal cooperative perception problem, where agents may have distinct sensor modalities. We present HM-ViT, the first unified multi-agent hetero-modal cooperative perception framework that can collaboratively predict 3D objects in highly dynamic vehicle-to-vehicle (V2V) collaborations with varying numbers and types of agents. To effectively fuse features from multi-view images and LiDAR point clouds, we design a novel heterogeneous 3D graph transformer that jointly reasons about inter-agent and intra-agent interactions. Extensive experiments on the V2V perception dataset OPV2V demonstrate that HM-ViT outperforms state-of-the-art cooperative perception methods on V2V hetero-modal cooperative perception. We will release our code to facilitate future research.
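The abstract's core architectural idea is attention over a graph of agents whose features come from different sensor modalities, with the projections conditioned on each node's modality type. The page does not include code, so the following is only a minimal NumPy sketch of that general idea (type-specific query/key/value projections over a fully connected agent graph); all function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hetero_graph_attention(feats, modalities, W_q, W_k, W_v):
    """One heterogeneous attention hop over a fully connected agent graph.

    feats:       (N, d) per-agent feature vectors
    modalities:  list of N strings, e.g. "camera" or "lidar"
    W_q/W_k/W_v: dicts mapping a modality name to a (d, d) projection,
                 so each node is projected with type-specific weights.
    Returns (N, d) fused per-agent features.
    """
    N, d = feats.shape
    # Modality-dependent projections: the key hetero-modal ingredient.
    Q = np.stack([W_q[m] @ feats[i] for i, m in enumerate(modalities)])
    K = np.stack([W_k[m] @ feats[i] for i, m in enumerate(modalities)])
    V = np.stack([W_v[m] @ feats[i] for i, m in enumerate(modalities)])
    # Scaled dot-product attention across all agent pairs (inter-agent fusion).
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)
    return attn @ V
```

A real implementation would operate on spatial BEV feature maps with multi-head attention rather than single vectors per agent; this sketch only shows how modality-specific parameters enter the attention computation.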

Hao Xiang, Runsheng Xu, Jiaqi Ma • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
3D Object Detection | OPV2V | AP@0.50 | 95 | 146
3D Object Detection | DAIR-V2X | AP@0.50 | 76.1 | 117
3D Object Detection | V2XSet | AP@0.50 | 80.3 | 70
Collaborative Perception | OPV2V (test) | AP@50 | 85.3 | 32
Collaborative Perception | V2XSet (test) | AP@50 | 82.4 | 32
3D Multi-Object Tracking | RCooper | AMOTA | 22.4 | 7
3D Object Detection | RCooper | AP@50 (A1) | 45.8 | 7
Object Detection | V2XSet | Performance Score 1 | 80.9 | 7
