
MUFASA: Multi-View Fusion and Adaptation Network with Spatial Awareness for Radar Object Detection

About

In recent years, radar-based object detection has made significant progress in autonomous driving systems, owing to radar's robustness under adverse weather compared with LiDAR. However, the sparsity of radar point clouds makes precise object detection difficult, which highlights the importance of effective and comprehensive feature extraction. To address this challenge, this paper introduces a comprehensive feature extraction method for radar point clouds. The study first enhances detection networks with a plug-and-play module, GeoSPA, which leverages Lalonde features to capture local geometric patterns. In addition, a distributed multi-view attention mechanism, DEMVA, is designed to integrate information shared across the entire dataset with the global information of each individual frame. Combining the two modules, we present our method, MUFASA, which improves object detection performance through enhanced feature extraction. The approach is evaluated on the VoD and TJ4DRadSet datasets to demonstrate its effectiveness. In particular, we achieve state-of-the-art results among radar-based methods on the VoD dataset, with an mAP of 50.24%.
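The Lalonde features mentioned above are eigenvalue-based saliency descriptors: for each point, the covariance matrix of its local neighborhood is eigen-decomposed, and the sorted eigenvalues indicate whether the neighborhood is locally linear, planar, or scattered. The following is a minimal sketch of that general idea (not the paper's GeoSPA implementation); the function name and the brute-force neighbor search are illustrative choices only.

```python
import numpy as np

def lalonde_features(points, k=8):
    """Eigenvalue-based saliency features for each point of an (N, 3) cloud.

    For every point, the covariance of its k nearest neighbours is
    eigen-decomposed; with sorted eigenvalues l1 >= l2 >= l3, the
    descriptors are linearity (l1 - l2), planarity (l2 - l3), and
    scattering (l3).
    """
    n = len(points)
    feats = np.zeros((n, 3))
    for i in range(n):
        # Brute-force k-NN: fine for a sketch; use a KD-tree for real clouds.
        dists = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        cov = np.cov(nbrs.T)                      # 3x3 neighbourhood covariance
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
        feats[i] = (l1 - l2, l2 - l3, l3)         # linear / planar / scattered
    return feats
```

For points sampled along a straight line, the linearity term dominates while planarity and scattering stay near zero, which is exactly the kind of local geometric cue a sparse radar point cloud otherwise lacks.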

Xiangyuan Peng, Miao Tang, Huawei Sun, Kay Bierzynski, Lorenzo Servadei, Robert Wille • 2024

Related benchmarks

Task                  Dataset                                          Metric       Result  Rank
3D Object Detection   View-of-Delft (VoD) Entire Annotated Area (val)  mAP3D        50.24   86
3D Object Detection   View-of-Delft (VoD) In Driving Corridor (val)    AP3D (Car)   72.5    52
3D Object Detection   TJ4DRadSet (test)                                mAP3D        28.87   44
BEV Object Detection  TJ4DRadSet (test)                                BEV mAP      36.19   21
3D Object Detection   VoD 5-scans (val)                                AP (Car)     43.1    12
3D Object Detection   TJ4D single-scan (test)                          mAP3D        30.23   11
