PETR: Position Embedding Transformation for Multi-View 3D Object Detection
About
In this paper, we develop position embedding transformation (PETR) for multi-view 3D object detection. PETR encodes the position information of 3D coordinates into image features, producing 3D position-aware features. Object queries can perceive these 3D position-aware features and perform end-to-end object detection. PETR achieves state-of-the-art performance (50.4% NDS and 44.1% mAP) on the standard nuScenes dataset and ranks 1st on the benchmark. It can serve as a simple yet strong baseline for future research. Code is available at https://github.com/megvii-research/PETR.
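The core idea in the abstract, lifting each feature-map pixel into a set of 3D frustum points and normalizing them to produce position-aware coordinates, can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the function name, depth sampling, and use of a single camera-to-world matrix are assumptions, and the MLP that maps coordinates to embeddings is omitted.

```python
import numpy as np

def make_3d_coords(feat_hw, depths, cam_to_world, pc_range):
    """Sketch of PETR-style 3D coordinate generation (hypothetical API).

    For each feature-map pixel, sample one point per depth along the
    camera ray, transform the frustum points into 3D space, and
    normalize by the point-cloud range. In PETR these normalized
    coordinates are fed to a small MLP (omitted here) to produce the
    3D position embedding added to the image features.
    """
    H, W = feat_hw
    # pixel centers in homogeneous image coordinates
    us, vs = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    per_depth = []
    for d in depths:
        # homogeneous frustum point (u*d, v*d, d, 1) for each pixel
        pix = np.stack([us * d, vs * d, np.full_like(us, d), np.ones_like(us)], axis=-1)
        pts = pix @ cam_to_world.T          # (H, W, 4): lift into 3D space
        per_depth.append(pts[..., :3])
    coords = np.stack(per_depth, axis=2)    # (H, W, D, 3)
    lo, hi = pc_range[:3], pc_range[3:]
    coords = (coords - lo) / (hi - lo)      # normalize by the 3D range
    return coords.reshape(H, W, len(depths) * 3)

# Example: a 4x6 feature map, two depth bins, identity extrinsics.
emb = make_3d_coords((4, 6), [1.0, 2.0], np.eye(4),
                     np.array([0.0, 0.0, 0.0, 10.0, 10.0, 10.0]))
```

The resulting `(H, W, D*3)` tensor is what a position-encoding MLP would consume before being added to the 2D image features, so that object queries attending to those features implicitly see 3D position.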
Yingfei Liu, Tiancai Wang, Xiangyu Zhang, Jian Sun • 2022
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| 3D Object Detection | nuScenes (val) | NDS 49.6 | 981 |
| 3D Object Detection | nuScenes (test) | mAP 44.5 | 874 |
| 3D Object Detection | nuScenes v1.0 (test) | mAP 44.5 | 210 |
| 3D Object Detection | nuScenes v1.0 (val) | mAP (Overall) 40.3 | 207 |
| 3D Object Detection | nuScenes (val) | mAP 37 | 128 |
| 3D Object Detection | Argoverse 2 (val) | mAP 17.6 | 76 |
| 3D Object Detection | Waymo Open Dataset LEVEL_1 (val) | 3D AP 20.9 | 60 |
| Object Detection | nuScenes (val) | mAP 37 | 48 |
| 3D Object Detection | Waymo (val) | -- | 38 |
| 3D Object Detection | nuScenes Night (val) | mAP 17.2 | 26 |
*Showing 10 of 23 rows.*