
Predicting Semantic Map Representations from Images using Pyramid Occupancy Networks

About

Autonomous vehicles commonly rely on highly detailed bird's-eye-view maps of their environment, which capture both static elements of the scene, such as road layout, as well as dynamic elements such as other cars and pedestrians. Generating these map representations on the fly is a complex multi-stage process which incorporates many important vision-based elements, including ground plane estimation, road segmentation and 3D object detection. In this work we present a simple, unified approach for estimating maps directly from monocular images using a single end-to-end deep learning architecture. For the maps themselves we adopt a semantic Bayesian occupancy grid framework, allowing us to trivially accumulate information over multiple cameras and timesteps. We demonstrate the effectiveness of our approach by evaluating against several challenging baselines on the NuScenes and Argoverse datasets, and show that we are able to achieve a relative improvement of 9.1% and 22.3% respectively compared to the best-performing existing method.
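The "trivial accumulation" enabled by the Bayesian occupancy grid framework can be illustrated with a minimal sketch. Assuming (as is standard for occupancy grids, and not a claim about the authors' exact implementation) that each frame yields per-cell occupancy probabilities and that observations are fused cell-wise in log-odds space, combining multiple cameras or timesteps reduces to a sum:

```python
import numpy as np

# Minimal sketch of Bayesian occupancy grid fusion. Each observation is a
# grid of per-cell occupancy probabilities p(occupied); fusing observations
# amounts to summing their log-odds (assuming cell-wise independence).

def to_log_odds(p, eps=1e-6):
    """Convert occupancy probabilities to log-odds, clipped for stability."""
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

def fuse_observations(prob_maps):
    """Fuse per-frame occupancy probability maps into a single grid."""
    log_odds = sum(to_log_odds(p) for p in prob_maps)
    return 1.0 / (1.0 + np.exp(-log_odds))  # sigmoid: back to probability

# Example: two observations of the same 2x2 grid (e.g. two timesteps).
frame_a = np.array([[0.9, 0.5], [0.2, 0.5]])
frame_b = np.array([[0.8, 0.5], [0.3, 0.5]])
fused = fuse_observations([frame_a, frame_b])
# Agreeing evidence sharpens the estimate, while an uninformative
# observation (p = 0.5 has zero log-odds) leaves a cell unchanged.
```

Working in log-odds space is what makes the accumulation order-independent and cheap: each new camera view or timestep is a single addition per cell.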

Thomas Roddick, Roberto Cipolla • 2020

Related benchmarks

Task                           | Dataset                                                          | Result                          | Rank
Semantic Segmentation          | nuScenes (val)                                                   | -                               | 212
LiDAR Semantic Segmentation    | nuScenes official (test)                                         | mIoU 24.7                       | 132
BEV Semantic Segmentation      | nuScenes (val)                                                   | Drivable Area IoU 60.4          | 28
BeV Segmentation               | nuScenes v1.0 (val)                                              | Drivable Area 63.05             | 25
BeV Segmentation               | nuScenes (val)                                                   | Vehicle Segmentation Score 27.9 | 16
Map-view Semantic Segmentation | Argoverse (val)                                                  | Vehicle IoU 31.4                | 9
Map Segmentation               | nuScenes                                                         | Drivable Area 60.4              | 8
Vehicle Map-view Segmentation  | nuScenes                                                         | mIoU 24.7                       | 8
Vehicle Segmentation           | nuScenes Setting 1: 100m x 50m at 25cm resolution v1.0-trainval (val) | mIoU 24.7                  | 7
Object Detection               | nuScenes v1.0 (val)                                              | -                               | 7
