
Cross-view Transformers for real-time Map-view Semantic Segmentation

About

We present cross-view transformers, an efficient attention-based model for map-view semantic segmentation from multiple cameras. Our architecture implicitly learns a mapping from individual camera views into a canonical map-view representation using a camera-aware cross-view attention mechanism. Each camera uses positional embeddings that depend on its intrinsic and extrinsic calibration. These embeddings allow a transformer to learn the mapping across different views without ever explicitly modeling it geometrically. The architecture consists of a convolutional image encoder for each view and cross-view transformer layers that infer a map-view semantic segmentation. Our model is simple, easily parallelizable, and runs in real time. The presented architecture achieves state-of-the-art performance on the nuScenes dataset with 4x faster inference. Code is available at https://github.com/bradyz/cross_view_transformers.
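The camera-aware positional embeddings described above are derived from each camera's calibration: every image-plane location is back-projected through the inverse intrinsics and rotated into a shared frame, giving a per-pixel ray direction the transformer can attend over. A minimal NumPy sketch of that geometry (not the paper's exact implementation; the function name and pixel-center convention are illustrative):

```python
import numpy as np

def camera_ray_embeddings(K, R, H, W):
    """Unproject an H x W grid of pixel centers into unit ray directions
    in a shared reference frame, using intrinsics K and rotation R.
    Embeddings of this form let cross-view attention reason about
    geometry without an explicit projective model."""
    # Homogeneous pixel-center coordinates, shape (3, H*W)
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])
    # Back-project through K^-1, then rotate out of the camera frame:
    # d = R^T K^-1 c   (R^T inverts a rotation matrix)
    dirs = R.T @ np.linalg.inv(K) @ pix
    dirs /= np.linalg.norm(dirs, axis=0, keepdims=True)  # unit-normalize
    return dirs.T.reshape(H, W, 3)

# Example: one camera with a simple pinhole calibration, 4 x 6 feature grid
K = np.array([[100.0, 0.0, 3.0],
              [0.0, 100.0, 2.0],
              [0.0,   0.0, 1.0]])
R = np.eye(3)  # camera axis-aligned with the reference frame
emb = camera_ray_embeddings(K, R, H=4, W=6)
print(emb.shape)  # (4, 6, 3): one 3-vector direction per grid cell
```

In the full model these directions would be lifted by a small MLP into the transformer's embedding dimension and added to the image features before cross-attention.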

Brady Zhou, Philipp Krähenbühl • 2022

Related benchmarks

Task | Dataset | Result | Rank
Semantic segmentation | nuScenes (val) | - | 212
BEV Semantic Segmentation | nuScenes (val) | Drivable Area IoU: 74.3 | 28
BEV segmentation (Vehicle) | nuScenes v1.0-trainval (val) | Vehicle BEV IoU: 37.7 | 28
BeV Segmentation | nuScenes v1.0 (val) | Drivable Area: 74.3 | 25
Map Segmentation | nuScenes (val) | IoU (Drive): 74.3 | 23
BeV Segmentation | nuScenes (val) | Vehicle Segmentation Score: 36 | 16
Vehicle Segmentation | nuScenes (val) | mIoU: 36 | 14
BeV vehicle segmentation | nuScenes (val) | IoU (No Filter, 224x480): 31.4 | 11
Map-view Semantic Segmentation | Argoverse (val) | Vehicle IoU: 35.2 | 9
Vehicle map-view segmentation | nuScenes | mIoU: 37.5 | 8
Showing 10 of 15 rows
