
Cross-view Transformers for real-time Map-view Semantic Segmentation

About

We present cross-view transformers, an efficient attention-based model for map-view semantic segmentation from multiple cameras. Our architecture implicitly learns a mapping from individual camera views into a canonical map-view representation using a camera-aware cross-view attention mechanism. Each camera uses positional embeddings that depend on its intrinsic and extrinsic calibration. These embeddings allow a transformer to learn the mapping across different views without ever explicitly modeling it geometrically. The architecture consists of a convolutional image encoder for each view and cross-view transformer layers to infer a map-view semantic segmentation. Our model is simple, easily parallelizable, and runs in real-time. The presented architecture performs at state-of-the-art on the nuScenes dataset, with 4x faster inference speeds. Code is available at https://github.com/bradyz/cross_view_transformers.
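The abstract describes camera-aware positional embeddings derived from each camera's calibration, which image features carry into a cross-attention step against map-view queries. A minimal sketch of that idea is below; the exact embedding form, the projection matrix, and all dimension choices are assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def camera_ray_embedding(K, E, h, w):
    """Unproject pixel centers through the inverse intrinsics K and rotate by
    the extrinsic rotation E[:3, :3] to get world-frame ray directions.
    One plausible camera-aware positional embedding (an assumption)."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs + 0.5, ys + 0.5, np.ones_like(xs, dtype=float)], axis=-1)
    cam_rays = pix.reshape(-1, 3) @ np.linalg.inv(K).T   # camera-frame directions
    world_rays = cam_rays @ E[:3, :3].T                  # rotate into world frame
    return world_rays / np.linalg.norm(world_rays, axis=-1, keepdims=True)

def cross_view_attention(map_queries, img_feats, pos_embed):
    """Single-head cross-attention: map-view queries attend over image
    features whose keys are augmented with the positional embedding."""
    keys = img_feats + pos_embed                          # (N, d)
    scores = map_queries @ keys.T / np.sqrt(keys.shape[-1])
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)              # softmax over pixels
    return attn @ img_feats                               # (Q, d) map-view output

# Toy usage: one 2x2 camera view, 8-dim features, 5 map-view queries.
K = np.array([[100.0, 0.0, 1.0], [0.0, 100.0, 1.0], [0.0, 0.0, 1.0]])
E = np.eye(4)                                             # identity extrinsics
rays = camera_ray_embedding(K, E, 2, 2)                   # (4, 3) unit rays
rng = np.random.default_rng(0)
proj = rng.normal(size=(3, 8))                            # stand-in learned projection
img_feats = rng.normal(size=(4, 8))
out = cross_view_attention(rng.normal(size=(5, 8)), img_feats, rays @ proj)
```

Because the embedding depends only on calibration, the same attention weights transfer across camera rigs without any explicit geometric projection, which is what lets the transformer learn the view mapping implicitly.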

Brady Zhou, Philipp Krähenbühl • 2022

Related benchmarks

Task                           Dataset                        Result                              Rank
Semantic segmentation          nuScenes (val)                 -                                   265
BEV Semantic Segmentation      nuScenes (val)                 Drivable Area IoU: 74.3             42
BeV vehicle segmentation       nuScenes                       Vehicle Segmentation IoU: 37.7      34
BEV segmentation (Vehicle)     nuScenes v1.0-trainval (val)   Vehicle BEV IoU: 37.7               28
BeV Segmentation               nuScenes v1.0 (val)            Drivable Area: 74.3                 25
Map Segmentation               nuScenes (val)                 IoU (Drive): 74.3                   23
BeV Segmentation               nuScenes (val)                 Vehicle Segmentation Score: 36      16
BEV Pedestrian Segmentation    nuScenes                       BEV Pedestrian IoU: 0.142           15
Vehicle Segmentation           nuScenes (val)                 mIoU: 36                            14
BeV vehicle segmentation       nuScenes (val)                 IoU (No Filter, 224x480): 31.4      11

Showing 10 of 16 rows
