
TransVOS: Video Object Segmentation with Transformers

About

Recently, Space-Time Memory Network (STM) based methods have achieved state-of-the-art performance in semi-supervised video object segmentation (VOS). A crucial problem in this task is how to model the dependency both among different frames and inside every frame. However, most of these methods neglect the spatial relationships (inside each frame) and do not make full use of the temporal relationships (among different frames). In this paper, we propose a new transformer-based framework, termed TransVOS, introducing a vision transformer to fully exploit and model both the temporal and spatial relationships. Moreover, most STM-based approaches employ two separate encoders to extract features of two significant inputs, i.e., reference sets (history frames with predicted masks) and query frame (current frame), respectively, increasing the models' parameters and complexity. To slim the popular two-encoder pipeline while keeping the effectiveness, we design a single two-path feature extractor to encode the above two inputs in a unified way. Extensive experiments demonstrate the superiority of our TransVOS over state-of-the-art methods on both DAVIS and YouTube-VOS datasets.
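The abstract's key efficiency idea is replacing the usual two separate encoders with one shared trunk fed by two paths: the query frame (RGB only) and the reference frames (RGB plus predicted mask). A minimal sketch of that idea, using hypothetical shapes and a single linear projection standing in for the real CNN backbone (the paper's actual layers and dimensions are not specified here):

```python
import numpy as np

# Hypothetical sketch of a single two-path feature extractor: one shared
# trunk (W_trunk) encodes both inputs, so parameters are not duplicated.
# The reference path fuses the predicted mask in as an extra input channel;
# the query path fills that channel with zeros.

rng = np.random.default_rng(0)
D = 64  # assumed feature dimension

# Shared trunk: a linear projection standing in for the CNN backbone.
# 4 input channels = RGB + one mask slot.
W_trunk = rng.standard_normal((4, D)) * 0.1

def encode_query(frame_rgb):
    """Query path: RGB tokens with an all-zero mask channel."""
    x = np.concatenate([frame_rgb, np.zeros_like(frame_rgb[..., :1])], axis=-1)
    return x @ W_trunk

def encode_reference(frame_rgb, mask):
    """Reference path: RGB tokens fused with the predicted mask."""
    x = np.concatenate([frame_rgb, mask[..., None]], axis=-1)
    return x @ W_trunk

# Toy inputs: an 8x8 "frame" flattened to 64 tokens of 3 channels each.
query = rng.standard_normal((64, 3))
ref = rng.standard_normal((64, 3))
ref_mask = (rng.random(64) > 0.5).astype(float)

q_feat = encode_query(query)               # (64, D)
r_feat = encode_reference(ref, ref_mask)   # (64, D)

# The reference and query tokens would then be concatenated into one
# sequence and passed to the vision transformer, whose self-attention
# models dependencies both inside each frame and across frames.
tokens = np.concatenate([r_feat, q_feat], axis=0)  # (128, D)
print(tokens.shape)
```

Because attention over `tokens` mixes all token pairs, spatial relations (within a frame) and temporal relations (across frames) are handled by the same mechanism rather than by a separate memory read, which is the contrast the abstract draws with STM-based pipelines.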

Jianbiao Mei, Mengmeng Wang, Yeneng Lin, Yi Yuan, Yong Liu • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Object Segmentation | DAVIS 2017 (val) | J mean | 81.4 | 1130 |
| Video Object Segmentation | DAVIS 2016 (val) | -- | -- | 564 |
| Video Object Segmentation | YouTube-VOS 2018 (val) | J Score (Seen) | 82 | 493 |
| Video Object Segmentation | DAVIS 2017 (test-dev) | Region J Mean | 73 | 237 |
| Video Salient Object Detection | ViSal | MAE | 0.021 | 42 |
| Video Salient Object Detection | FBMS | F-beta Score (Fβ) | 0.886 | 31 |
| Video Salient Object Detection | Seg V2 | Sm | 81.6 | 18 |
| Video Salient Object Detection | DAVIS '16 | MAE | 0.018 | 17 |
