
Multi-Modal Decouple and Recouple Network for Robust 3D Object Detection

About

Multi-modal 3D object detection with bird's eye view (BEV) representations has achieved strong results on benchmarks. Nonetheless, accuracy may drop significantly in the real world due to data corruption, such as sensor configurations for LiDAR and scene conditions for the camera. One design bottleneck of previous models is the tight coupling of multi-modal BEV features during fusion, which can degrade overall system performance when one or both modalities are corrupted. To mitigate this, we propose a Multi-Modal Decouple and Recouple Network for robust 3D object detection under data corruption. Different modalities commonly share some high-level invariant features, and we observe that these invariant features across modalities do not always fail simultaneously, because different types of data corruption affect each modality in distinct ways. These invariant features can therefore be recovered across modalities for robust fusion under data corruption. To this end, we explicitly decouple camera/LiDAR BEV features into modality-invariant and modality-specific parts. This allows invariant features to compensate for each other while mitigating the negative impact of a corrupted modality on the other. We then recouple these features into three experts that handle different types of data corruption, i.e., LiDAR, camera, and both. For each expert, modality-invariant features serve as robust information, while modality-specific features serve as a complement. Finally, we adaptively fuse the three experts to extract robust features for 3D object detection. For validation, we collect a benchmark with a large quantity of corrupted data for LiDAR, camera, and both, based on nuScenes. Our model is trained on clean nuScenes and tested on all types of data corruption. It consistently achieves the best accuracy on both corrupted and clean data compared to recent models.
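The decouple-and-recouple idea described above can be illustrated with a minimal NumPy sketch. This is NOT the paper's actual architecture: the projection matrices, feature dimensions, expert compositions, and the pooled-logit gating below are all toy assumptions, standing in for learned network layers, purely to make the data flow concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 100, 64  # flattened BEV positions and channel dim (assumed values)

# Placeholder BEV features from the camera and LiDAR branches.
cam = rng.standard_normal((N, C))
lidar = rng.standard_normal((N, C))

# Decouple: project each modality into modality-invariant and
# modality-specific parts (random matrices stand in for learned layers).
proj = {k: rng.standard_normal((C, C)) / np.sqrt(C)
        for k in ("cam_inv", "cam_spec", "lid_inv", "lid_spec")}
cam_inv, cam_spec = cam @ proj["cam_inv"], cam @ proj["cam_spec"]
lid_inv, lid_spec = lidar @ proj["lid_inv"], lidar @ proj["lid_spec"]

# Recouple into three experts, one per corruption type. Each expert keeps
# the invariant features as its robust core; the specific features of the
# presumably clean modality act as a complement (assumed composition).
experts = [
    np.concatenate([cam_inv, lid_inv, cam_spec], axis=-1),  # LiDAR corrupted
    np.concatenate([cam_inv, lid_inv, lid_spec], axis=-1),  # camera corrupted
    np.concatenate([cam_inv, lid_inv], axis=-1),            # both corrupted
]

# Project every expert back to C channels so they can be fused.
heads = [rng.standard_normal((e.shape[-1], C)) / np.sqrt(e.shape[-1])
         for e in experts]
feats = np.stack([e @ w for e, w in zip(experts, heads)])  # (3, N, C)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Adaptive fusion: a toy gate from globally pooled expert features.
gates = softmax(feats.mean(axis=(1, 2)))      # (3,), sums to 1
fused = np.tensordot(gates, feats, axes=1)    # (N, C) robust BEV features

print(fused.shape, gates.shape)
```

In the real model the gates would be predicted by a small network conditioned on the inputs, so corrupted-modality experts are down-weighted automatically; here the gating is only schematic.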

Rui Ding, Zhaonian Kuang, Yuzhe Ji, Meng Yang, Xinhu Zheng, Gang Hua• 2026

Related benchmarks

Task                 Dataset                                         Result                     Rank
3D Object Detection  nuScenes (val)                                  NDS 72.5                   981
3D Object Detection  nuScenes (test)                                 mAP 70.5                   874
3D Object Detection  nuScenes                                        --                         11
3D Object Detection  nuScenes Clean v1.0 (val)                       mAP 69.5                   10
3D Object Detection  nuScenes camera corruptions v1.0 (val)          Brightness NDS 71.4        8
3D Object Detection  nuScenes (val)                                  Clean NDS 72               7
3D Object Detection  nuScenes LiDAR Scene Corruptions v1.0 (test)    NDS (Beam Missing) 71.3    7
3D Object Detection  nuScenes FOV 180° v1.0 (val)                    NDS 59.5                   5
3D Object Detection  nuScenes FOV 120° v1.0 (val)                    NDS 55.3                   5
