
Mining Relations among Cross-Frame Affinities for Video Semantic Segmentation

About

The essence of video semantic segmentation (VSS) is how to leverage temporal information for prediction. Previous efforts have mainly been devoted to developing new techniques for calculating cross-frame affinities, such as optical flow and attention. Instead, this paper contributes from a different angle: mining relations among the cross-frame affinities themselves, enabling better temporal information aggregation. We explore these relations in two respects: single-scale intrinsic correlations and multi-scale relations. Inspired by traditional feature processing, we propose Single-scale Affinity Refinement (SAR) and Multi-scale Affinity Aggregation (MAA). To make MAA feasible, we propose a Selective Token Masking (STM) strategy that selects a subset of consistent reference tokens across scales when calculating affinities, which also improves the efficiency of our method. Finally, the cross-frame affinities strengthened by SAR and MAA are used to adaptively aggregate temporal information. Our experiments demonstrate that the proposed method performs favorably against state-of-the-art VSS methods. The code is publicly available at https://github.com/GuoleiSun/VSS-MRCFA

Guolei Sun, Yun Liu, Hao Tang, Ajad Chhatkuli, Le Zhang, Luc Van Gool • 2022
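The STM idea described in the abstract, selecting a consistent subset of reference tokens before computing affinities, can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the authors' implementation: the scoring rule (peak affinity per reference token), dot-product similarity, and single-head aggregation are simplifications; the paper's actual method operates across multiple scales with SAR and MAA refining the affinities.

```python
import numpy as np

def selective_token_masking(query, ref, k):
    """Toy STM-style selection: keep the k reference tokens with the
    highest peak affinity to any query token (scoring rule is an
    assumption, not the paper's exact criterion).
    query: (Nq, C) target-frame tokens; ref: (Nr, C) reference tokens."""
    affinity = query @ ref.T                      # cross-frame affinity, (Nq, Nr)
    scores = affinity.max(axis=0)                 # strongest response per ref token
    keep = np.argsort(scores)[-k:]                # indices of the top-k tokens
    return ref[keep], affinity[:, keep]

def aggregate(query, ref, k):
    """Aggregate reference features into query positions using
    softmax-normalised affinities over the selected tokens."""
    ref_sel, aff = selective_token_masking(query, ref, k)
    w = np.exp(aff - aff.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)             # row-wise softmax
    return w @ ref_sel                            # (Nq, C)

rng = np.random.default_rng(0)
q = rng.standard_normal((16, 8))   # 16 query tokens, 8 channels
r = rng.standard_normal((64, 8))   # 64 reference tokens
out = aggregate(q, r, k=8)
print(out.shape)                   # (16, 8)
```

Because only k of the Nr reference tokens survive the mask, the affinity matrices handled downstream shrink from (Nq, Nr) to (Nq, k), which is where the efficiency gain mentioned in the abstract comes from.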

Related benchmarks

| Task                        | Dataset           | Metric | Result | Rank |
|-----------------------------|-------------------|--------|--------|------|
| Video Semantic Segmentation | VSPW (val)        | mIoU   | 49.9   | 121  |
| Video Semantic Segmentation | Cityscapes (val)  | mIoU   | 75.1   | 103  |
| Video Semantic Segmentation | VSPW              | mIoU   | 49.9   | 52   |
| Video Semantic Segmentation | CamVid            | mIoU   | 61.8   | 41   |
| Video Semantic Segmentation | NYU V2            | mIoU   | 46.7   | 27   |
| Video Semantic Segmentation | VSPW (test)       | mIoU   | 49.9   | 25   |
| Video Semantic Segmentation | Cityscapes        | mIoU   | 75.1   | 8    |

Code

https://github.com/GuoleiSun/VSS-MRCFA