
4D-Former: Multimodal 4D Panoptic Segmentation

About

4D panoptic segmentation is a challenging but practically useful task that requires every point in a LiDAR point-cloud sequence to be assigned a semantic class label, and individual objects to be segmented and tracked over time. Existing approaches utilize only LiDAR inputs which convey limited information in regions with point sparsity. This problem can, however, be mitigated by utilizing RGB camera images which offer appearance-based information that can reinforce the geometry-based LiDAR features. Motivated by this, we propose 4D-Former: a novel method for 4D panoptic segmentation which leverages both LiDAR and image modalities, and predicts semantic masks as well as temporally consistent object masks for the input point-cloud sequence. We encode semantic classes and objects using a set of concise queries which absorb feature information from both data modalities. Additionally, we propose a learned mechanism to associate object tracks over time which reasons over both appearance and spatial location. We apply 4D-Former to the nuScenes and SemanticKITTI datasets where it achieves state-of-the-art results.
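The paper does not include implementation details here, but the core idea of a set of queries absorbing features from both modalities can be sketched as cross-attention applied to LiDAR and image features in turn, followed by a dot product against per-point features to produce mask logits. The module name, shapes, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MultimodalQueryDecoder(nn.Module):
    """Hypothetical sketch of query-based multimodal mask prediction.

    A set of learned queries cross-attends to LiDAR features (geometry)
    and then to image features (appearance), and each refined query is
    dot-producted with the per-point LiDAR features to yield a mask over
    the point cloud. Shapes and names are assumptions for illustration.
    """

    def __init__(self, num_queries=8, dim=32):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)
        self.attn_lidar = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.attn_image = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, lidar_feats, image_feats):
        # lidar_feats: (B, N_points, dim); image_feats: (B, N_pixels, dim)
        q = self.queries.weight.unsqueeze(0).expand(lidar_feats.size(0), -1, -1)
        q, _ = self.attn_lidar(q, lidar_feats, lidar_feats)  # absorb geometry
        q, _ = self.attn_image(q, image_feats, image_feats)  # absorb appearance
        # One mask logit per (query, point) pair.
        mask_logits = torch.einsum("bqd,bnd->bqn", q, lidar_feats)
        return mask_logits  # (B, num_queries, N_points)


decoder = MultimodalQueryDecoder()
lidar = torch.randn(2, 100, 32)   # 2 scans, 100 points each
image = torch.randn(2, 50, 32)    # 2 images, 50 pixel tokens each
logits = decoder(lidar, image)
print(logits.shape)  # torch.Size([2, 8, 100])
```

In the actual method, such masks would be supervised per class and per object, and the refined queries would also feed the track-association head that reasons over appearance and spatial location.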

Ali Athar, Enxu Li, Sergio Casas, Raquel Urtasun • 2023

Related benchmarks

Task                    Dataset               Result                       Rank
Semantic segmentation   nuScenes (val)        mIoU (Segmentation) 0.789    212
Semantic segmentation   SemanticKITTI (val)   mIoU 66.3                    117
Semantic segmentation   nuScenes (test)       mIoU 80.4                    75
