Predicting Semantic Map Representations from Images using Pyramid Occupancy Networks
About
Autonomous vehicles commonly rely on highly detailed bird's-eye-view maps of their environment, which capture both static elements of the scene, such as road layout, and dynamic elements, such as other cars and pedestrians. Generating these map representations on the fly is a complex multi-stage process that incorporates many important vision-based elements, including ground plane estimation, road segmentation and 3D object detection. In this work we present a simple, unified approach for estimating maps directly from monocular images using a single end-to-end deep learning architecture. For the maps themselves we adopt a semantic Bayesian occupancy grid framework, which allows us to trivially accumulate information over multiple cameras and timesteps. We demonstrate the effectiveness of our approach by evaluating against several challenging baselines on the NuScenes and Argoverse datasets, and show that we are able to achieve relative improvements of 9.1% and 22.3% respectively over the best-performing existing method.
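The Bayesian occupancy grid accumulation mentioned above is conventionally done in log-odds space, where independent observations simply add. The following is a minimal sketch of that fusion step, not the paper's actual implementation: `accumulate` and its arguments are hypothetical names, and the per-frame probability maps stand in for the network's per-camera or per-timestep outputs after warping into a common grid.

```python
import numpy as np

def logit(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def accumulate(prob_maps, prior=0.5):
    """Fuse per-frame occupancy probability maps (hypothetical helper).

    Each map is an array of P(occupied) values over the same BEV grid,
    assumed to be independent observations. Fusion adds the log-odds of
    each observation relative to the prior, then maps back to probability.
    """
    log_odds = np.full(prob_maps[0].shape, logit(prior))
    for p in prob_maps:
        # Clip to avoid infinities at exactly 0 or 1.
        p = np.clip(p, 1e-6, 1.0 - 1e-6)
        log_odds += logit(p) - logit(prior)
    return 1.0 / (1.0 + np.exp(-log_odds))  # sigmoid: back to probability
```

With a uniform prior, two agreeing observations of 0.8 fuse to roughly 0.94, while an uninformative observation of 0.5 leaves the grid unchanged, which is what makes accumulation across cameras and timesteps trivial.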
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Semantic Segmentation | nuScenes (val) | -- | 212 |
| LiDAR Semantic Segmentation | nuScenes official (test) | mIoU 24.7 | 132 |
| BEV Semantic Segmentation | nuScenes (val) | Drivable Area IoU 60.4 | 28 |
| BeV Segmentation | nuScenes v1.0 (val) | Drivable Area 63.05 | 25 |
| BeV Segmentation | nuScenes (val) | Vehicle Segmentation Score 27.9 | 16 |
| Map-view Semantic Segmentation | Argoverse (val) | Vehicle IoU 31.4 | 9 |
| Map Segmentation | nuScenes | Drivable Area 60.4 | 8 |
| Vehicle map-view segmentation | nuScenes | mIoU 24.7 | 8 |
| Vehicle Segmentation | nuScenes (Setting 1: 100m x 50m at 25cm resolution, v1.0-trainval, val) | mIoU 24.7 | 7 |
| Object Detection | nuScenes v1.0 (val) | -- | 7 |