
Translating Images into Maps

About

We approach instantaneous mapping, converting images to a top-down view of the world, as a translation problem. We show how a novel form of transformer network can be used to map from images and video directly to an overhead map or bird's-eye-view (BEV) of the world, in a single end-to-end network. We assume a 1-1 correspondence between a vertical scanline in the image and rays passing through the camera location in an overhead map. This lets us formulate map generation from an image as a set of sequence-to-sequence translations. Posing the problem as translation allows the network to use the context of the image when interpreting the role of each pixel. This constrained formulation, based upon a strong physical grounding of the problem, leads to a restricted transformer network that is convolutional in the horizontal direction only. The structure allows us to make efficient use of data when training, and obtains state-of-the-art results for instantaneous mapping on three large-scale datasets, including a 15% and 30% relative gain against the existing best-performing methods on the nuScenes and Argoverse datasets, respectively. We make our code available at https://github.com/avishkarsaha/translating-images-into-maps.
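To make the column-to-ray assumption concrete, here is a minimal PyTorch sketch (not the authors' released code): a single cross-attention layer stands in for the paper's full transformer, and names such as ColumnToRayAttention, ray_queries, and depth_bins are illustrative assumptions. Each vertical image column is treated as an independent token sequence and translated into a polar ray of depth bins, with weights shared across columns so the module is convolutional in the horizontal direction only.

import torch
import torch.nn as nn

class ColumnToRayAttention(nn.Module):
    """Translate each vertical image column (H tokens) into a polar ray
    (D depth bins). Weights are shared across columns, so the module is
    convolutional in the horizontal direction only."""
    def __init__(self, channels: int, depth_bins: int, num_heads: int = 4):
        super().__init__()
        # Learned queries: one embedding per radial depth bin on the ray
        # (an assumed design, standing in for the paper's decoder inputs).
        self.ray_queries = nn.Parameter(torch.randn(depth_bins, channels))
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) image features from some CNN backbone.
        B, C, H, W = feats.shape
        # Fold width into the batch: every column becomes its own sequence
        # of H tokens, enforcing the 1-1 column-to-ray correspondence.
        cols = feats.permute(0, 3, 2, 1).reshape(B * W, H, C)
        queries = self.ray_queries.unsqueeze(0).expand(B * W, -1, -1)
        rays, _ = self.attn(queries, cols, cols)  # (B*W, D, C)
        # Unfold back: one ray of D depth bins per image column.
        D = rays.shape[1]
        return rays.reshape(B, W, D, C).permute(0, 3, 2, 1)  # (B, C, D, W)

feats = torch.randn(2, 64, 32, 100)  # toy backbone features
bev_polar = ColumnToRayAttention(64, depth_bins=48)(feats)
print(bev_polar.shape)  # torch.Size([2, 64, 48, 100])

The resulting (B, C, D, W) tensor is a polar BEV feature map (depth along rays, one ray per image column); in the full method this would still need resampling into a Cartesian grid and a segmentation head.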

Avishkar Saha, Oscar Mendez Maldonado, Chris Russell, Richard Bowden • 2021

Related benchmarks

Task                    Dataset              Result                     Rank
Semantic segmentation   nuScenes (val)       -                          212
BEV Segmentation        nuScenes v1.0 (val)  -                          25
Vehicle Segmentation    nuScenes (val)       mIoU: 41.3                 14
BEV Segmentation        KITTI-360 (val)      BEV Seg IoU Large: 20.46   7
