
SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation

About

In this paper we introduce a Transformer-based approach to video object segmentation (VOS). To address compounding error and scalability issues of prior work, we propose a scalable, end-to-end method for VOS called Sparse Spatiotemporal Transformers (SST). SST extracts per-pixel representations for each object in a video using sparse attention over spatiotemporal features. Our attention-based formulation for VOS allows a model to learn to attend over a history of multiple frames and provides suitable inductive bias for performing correspondence-like computations necessary for solving motion segmentation. We demonstrate the effectiveness of attention-based over recurrent networks in the spatiotemporal domain. Our method achieves competitive results on YouTube-VOS and DAVIS 2017 with improved scalability and robustness to occlusions compared with the state of the art. Code is available at https://github.com/dukebw/SSTVOS.
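The core formulation the abstract describes is attention in which each query pixel of the current frame attends over pixels across a history of past frames. The sketch below is a minimal dense version of that spatiotemporal attention in NumPy, written for illustration only: the function name, shapes, and scaled dot-product form are assumptions, and SST's actual contribution is a *sparse* attention pattern over these spatiotemporal features for scalability, which this dense toy version does not implement.

```python
import numpy as np

def spatiotemporal_attention(query_feats, memory_feats):
    """Dense spatiotemporal attention sketch (illustrative, not SST's sparse variant).

    query_feats:  (HW, C)    per-pixel features of the current frame
    memory_feats: (T, HW, C) per-pixel features of a history of T frames
    Returns:      (HW, C)    features aggregated from the frame history
    """
    C = query_feats.shape[-1]
    keys = memory_feats.reshape(-1, C)                 # flatten time+space: (T*HW, C)
    scores = query_feats @ keys.T / np.sqrt(C)         # scaled dot-product: (HW, T*HW)
    scores -= scores.max(axis=-1, keepdims=True)       # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # each query's weights sum to 1
    return weights @ keys                              # weighted sum over history
```

Because every query attends over all T*HW memory positions, the cost of this dense version grows linearly with the length of the frame history; sparsifying the attention pattern is what makes attending over many frames tractable.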

Brendan Duke, Abdalla Ahmed, Christian Wolf, Parham Aarabi, Graham W. Taylor • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Object Segmentation | DAVIS 2017 (val) | J mean | 79.9 | 1130 |
| Video Object Segmentation | YouTube-VOS 2018 (val) | J Score (Seen) | 81.2 | 493 |
| Video Object Segmentation | YouTube-VOS 2019 (val) | J-Score (Seen) | 80.9 | 231 |
| Audio-Visual Segmentation | AVSBench S4 v1 (test) | MJ | 66.3 | 55 |
| Audio-Visual Segmentation | AVSBench MS3 (test) | Jaccard Index (IoU) | 42.6 | 30 |
| Sound Target Segmentation | AVSBench-object MS3 1.0 (test) | mIoU | 42.6 | 23 |
| Audio-Visual Segmentation | AVSBench S4 (test) | -- | -- | 16 |
| Audio-Visual Segmentation | AVS-Object-Single | J&F Score | 73.2 | 13 |
| Audio-Visual Segmentation | AVS-Object-Multi | J&F Score | 49.9 | 13 |

Other info

Code: https://github.com/dukebw/SSTVOS