Cross-view Transformers for real-time Map-view Semantic Segmentation
About
We present cross-view transformers, an efficient attention-based model for map-view semantic segmentation from multiple cameras. Our architecture implicitly learns a mapping from individual camera views into a canonical map-view representation using a camera-aware cross-view attention mechanism. Each camera uses positional embeddings that depend on its intrinsic and extrinsic calibration. These embeddings allow a transformer to learn the mapping across different views without ever explicitly modeling it geometrically. The architecture consists of a convolutional image encoder for each view and cross-view transformer layers to infer a map-view semantic segmentation. Our model is simple, easily parallelizable, and runs in real time. The presented architecture achieves state-of-the-art performance on the nuScenes dataset, with 4x faster inference speeds. Code is available at https://github.com/bradyz/cross_view_transformers.
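The core idea above, map-view queries attending to camera features whose keys carry calibration-dependent positional embeddings, can be sketched as follows. This is a minimal NumPy illustration, not the repository's implementation: the function name `cross_view_attention` and the idea of passing a precomputed per-camera positional embedding as an input are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(map_queries, cam_features, cam_pos_embed):
    """Single-head cross-attention from map-view queries to one camera.

    map_queries:   (M, d) learned map-view query embeddings
    cam_features:  (N, d) flattened image features from one camera view
    cam_pos_embed: (N, d) positional embeddings derived from that camera's
                   intrinsic/extrinsic calibration (assumed precomputed here)
    """
    d = map_queries.shape[-1]
    # Camera-aware keys: image features plus calibration-dependent embedding,
    # so the attention pattern can depend on where each camera looks.
    keys = cam_features + cam_pos_embed
    scores = map_queries @ keys.T / np.sqrt(d)   # (M, N)
    attn = softmax(scores, axis=-1)
    # Aggregate camera features into the map-view representation.
    return attn @ cam_features                   # (M, d)
```

In the full model, each of the surrounding cameras contributes its own keys and values, and the map-view queries attend over all of them jointly; stacking such layers with learned projections yields the cross-view transformer described above.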
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Semantic segmentation | nuScenes (val) | -- | 212 |
| BEV Semantic Segmentation | nuScenes (val) | Drivable Area IoU: 74.3 | 28 |
| BEV segmentation (Vehicle) | nuScenes v1.0-trainval (val) | Vehicle BEV IoU: 37.7 | 28 |
| BeV Segmentation | nuScenes v1.0 (val) | Drivable Area: 74.3 | 25 |
| Map Segmentation | nuScenes (val) | IoU (Drive): 74.3 | 23 |
| BeV Segmentation | nuScenes (val) | Vehicle Segmentation Score: 36 | 16 |
| Vehicle Segmentation | nuScenes (val) | mIoU: 36 | 14 |
| BeV vehicle segmentation | nuScenes (val) | IoU (No Filter, 224x480): 31.4 | 11 |
| Map-view Semantic Segmentation | Argoverse (val) | Vehicle IoU: 35.2 | 9 |
| Vehicle map-view segmentation | nuScenes | mIoU: 37.5 | 8 |