
DualCross: Cross-Modality Cross-Domain Adaptation for Monocular BEV Perception

About

Closing the domain gap between training and deployment and incorporating multiple sensor modalities are two challenging yet critical topics for self-driving. Existing work focuses on only one of these topics, overlooking the simultaneous domain and modality shift that pervades real-world scenarios. For example, a model trained with multi-sensor data collected in Europe may need to run in Asia with only a subset of the input sensors available. In this work, we propose DualCross, a cross-modality cross-domain adaptation framework that facilitates the learning of a more robust monocular bird's-eye-view (BEV) perception model by transferring point cloud knowledge from a LiDAR sensor in one domain during training to a camera-only testing scenario in a different domain. This work provides the first open analysis of cross-domain cross-sensor perception and adaptation for monocular 3D tasks in the wild. We benchmark our approach on large-scale datasets under a wide range of domain shifts and show state-of-the-art results against various baselines.
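The abstract describes transferring LiDAR point cloud knowledge to a camera-only model during training. A common way to realize this kind of cross-modality transfer is feature-level distillation: a camera (student) branch is trained to mimic the BEV features of a LiDAR (teacher) branch while also minimizing the usual segmentation loss. The sketch below is an illustration of that general idea with NumPy, not the paper's actual loss; the function name, the MSE mimicking term, and the weighting factor `alpha` are all assumptions for exposition.

```python
import numpy as np

def cross_modal_loss(camera_feat, lidar_feat, seg_logits, seg_labels, alpha=0.5):
    """Illustrative combined loss for LiDAR-to-camera knowledge transfer.

    camera_feat, lidar_feat: (C, H, W) BEV feature maps from the camera
    (student) and LiDAR (teacher) branches.
    seg_logits: (K, H, W) per-class BEV segmentation logits.
    seg_labels: (H, W) integer class labels.
    """
    # Feature-mimicking term: pull camera BEV features toward LiDAR features.
    mimic = np.mean((camera_feat - lidar_feat) ** 2)

    # Standard pixel-wise cross-entropy on the BEV segmentation head
    # (softmax computed with the max-subtraction trick for stability).
    shifted = seg_logits - seg_logits.max(axis=0, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=0, keepdims=True)
    h, w = seg_labels.shape
    picked = probs[seg_labels, np.arange(h)[:, None], np.arange(w)]
    ce = -np.mean(np.log(picked))

    return ce + alpha * mimic
```

In an actual training loop the teacher branch would only exist at training time; at test time the camera branch runs alone, which is what makes the deployed model monocular.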

Yunze Man, Liang-Yan Gui, Yu-Xiong Wang • 2023

Related benchmarks

Task                       Dataset                                       Result                           Rank
BEV Semantic Segmentation  nuScenes Boston -> Singapore 1.0 (test val)   Drivable Area Score: 43.8        6
BEV Semantic Segmentation  nuScenes Singapore -> Boston 1.0 (test val)   mIoU (Drivable Area): 45.7       6
BEV Semantic Segmentation  nuScenes Day -> Night 1.0 (test val)          Driver: 49.4                     6
BEV Semantic Segmentation  nuScenes Dry -> Rain 1.0 (test val)           Class IoU: Driveable Area: 67.9  6
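The benchmark results above report IoU-style scores for the drivable-area class in BEV. For reference, intersection-over-union between a predicted and a ground-truth BEV mask can be computed as below; this is the standard metric definition, though the exact evaluation protocol (resolution, class set, averaging) is dataset-specific and not specified here.

```python
import numpy as np

def bev_iou(pred, gt):
    """Intersection-over-Union between two binary BEV occupancy masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Two empty masks agree perfectly; avoid division by zero.
    return inter / union if union else 1.0

pred = np.array([[1, 1], [0, 0]])
gt   = np.array([[1, 0], [1, 0]])
print(bev_iou(pred, gt))  # → 0.3333333333333333
```

mIoU is then the mean of this quantity over the evaluated classes.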
