SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation
About
In this paper we introduce a Transformer-based approach to video object segmentation (VOS). To address the compounding error and scalability issues of prior work, we propose a scalable, end-to-end method for VOS called Sparse Spatiotemporal Transformers (SST). SST extracts per-pixel representations for each object in a video using sparse attention over spatiotemporal features. Our attention-based formulation for VOS allows a model to learn to attend over a history of multiple frames and provides suitable inductive bias for performing correspondence-like computations necessary for solving motion segmentation. We demonstrate the effectiveness of attention-based networks over recurrent networks in the spatiotemporal domain. Our method achieves competitive results on YouTube-VOS and DAVIS 2017 with improved scalability and robustness to occlusions compared with the state of the art. Code is available at https://github.com/dukebw/SSTVOS.
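To illustrate the core idea of attending sparsely over a history of frames, here is a minimal NumPy sketch in which each query pixel of the current frame attends only to its top-k most similar pixels across the memory frames. This is a simplified illustration, not the paper's actual sparse attention pattern; the function name, the top-k sparsity scheme, and all shapes are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax; -inf entries get zero weight
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_spatiotemporal_attention(query, memory, k=4):
    """Toy sparse attention over a spatiotemporal memory.

    query:  (Q, C) per-pixel features of the current frame.
    memory: (T, N, C) per-pixel features of T past frames.
    Each query pixel attends only to its top-k most similar
    memory pixels (a simple sparsity pattern chosen for
    illustration; the paper's pattern differs).
    """
    T, N, C = memory.shape
    mem = memory.reshape(T * N, C)
    # scaled dot-product similarity between queries and all memory pixels
    scores = query @ mem.T / np.sqrt(C)          # (Q, T*N)
    # keep only the top-k scores per query pixel, mask out the rest
    kth = np.partition(scores, -k, axis=1)[:, -k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = softmax(masked, axis=1)            # sparse attention weights
    return weights @ mem                         # (Q, C) aggregated features
```

In a full model these aggregated features would feed a per-pixel decoder that predicts object masks; restricting each query to k memory locations keeps the cost linear in the number of frames attended over.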
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Object Segmentation | DAVIS 2017 (val) | J mean | 79.9 | 1130 |
| Video Object Segmentation | YouTube-VOS 2018 (val) | J Score (Seen) | 81.2 | 493 |
| Video Object Segmentation | YouTube-VOS 2019 (val) | J-Score (Seen) | 80.9 | 231 |
| Audio-Visual Segmentation | AVSBench S4 v1 (test) | MJ | 66.3 | 55 |
| Audio-Visual Segmentation | AVSBench MS3 (test) | Jaccard Index (IoU) | 42.6 | 30 |
| Sound Target Segmentation | AVSBench-object MS3 1.0 (test) | mIoU | 42.6 | 23 |
| Audio-Visual Segmentation | AVSBench S4 (test) | -- | -- | 16 |
| Audio-Visual Segmentation | AVS-Object-Single | J&F Score | 73.2 | 13 |
| Audio-Visual Segmentation | AVS-Object-Multi | J&F Score | 49.9 | 13 |