
DeVIS: Making Deformable Transformers Work for Video Instance Segmentation

About

Video Instance Segmentation (VIS) jointly tackles multi-object detection, tracking, and segmentation in video sequences. In the past, VIS methods mirrored the fragmentation of these subtasks in their architectural design, hence missing out on a joint solution. Transformers have recently made it possible to cast the entire VIS task as a single set-prediction problem. Nevertheless, the quadratic complexity of existing Transformer-based methods requires long training times, high memory consumption, and processing of low-resolution, single-scale feature maps. Deformable attention provides a more efficient alternative, but its application to the temporal domain or the segmentation task has not yet been explored. In this work, we present Deformable VIS (DeVIS), a VIS method which capitalizes on the efficiency and performance of deformable Transformers. To reason about all VIS subtasks jointly over multiple frames, we present temporal multi-scale deformable attention with instance-aware object queries. We further introduce a new image and video instance mask head with multi-scale features, and perform near-online video processing with multi-cue clip tracking. DeVIS reduces memory as well as training time requirements, and achieves state-of-the-art results on the YouTube-VIS 2021 as well as the challenging OVIS dataset. Code is available at https://github.com/acaelles97/DeVIS.
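The core mechanism the abstract builds on — deformable attention, where each query attends only to a small set of learned sampling offsets around a reference point, here extended across frames — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the single-scale feature maps, and the weighting scheme are all assumptions, and a real model would predict the offsets and weights from the query with learned projections.

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly sample a feature map feat of shape (H, W, C) at the
    continuous, in-bounds coordinate (x, y)."""
    H, W, _ = feat.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * feat[y0, x0]
            + wx * (1 - wy) * feat[y0, x1]
            + (1 - wx) * wy * feat[y1, x0]
            + wx * wy * feat[y1, x1])

def temporal_deformable_attention(frames, ref_point, offsets, weights):
    """Sketch of temporal deformable attention for one query: sample K
    offset points per frame around a shared reference point and return
    the attention-weighted sum of the sampled features.

    frames:  (T, H, W, C) feature maps, one per frame
    ref_point: (x, y) reference location of the query
    offsets: (T, K, 2) sampling offsets (in a real model: predicted
             from the query by a linear layer)
    weights: (T, K) attention weights, normalized to sum to 1 across
             all frames and sampling points
    """
    rx, ry = ref_point
    out = np.zeros(frames.shape[-1])
    for t, frame in enumerate(frames):
        H, W, _ = frame.shape
        for k in range(offsets.shape[1]):
            dx, dy = offsets[t, k]
            # Clamp the sampling location to the feature map bounds.
            x = float(np.clip(rx + dx, 0, W - 1))
            y = float(np.clip(ry + dy, 0, H - 1))
            out += weights[t, k] * bilinear_sample(frame, x, y)
    return out
```

Because each query touches only T·K sampled locations instead of every pixel in every frame, the cost stays linear in the number of queries rather than quadratic, which is the efficiency argument the abstract makes.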

Adrià Caelles, Tim Meinhardt, Guillem Brasó, Laura Leal-Taixé • 2022

Related benchmarks

Task                          Dataset                   Result    Rank
Instance Segmentation         COCO 2017 (val)           --        1144
Video Instance Segmentation   YouTube-VIS 2019 (val)    57.1 AP   567
Video Instance Segmentation   YouTube-VIS 2021 (val)    54.4 AP   344
Video Instance Segmentation   OVIS (val)                34.6 AP   301

Other info

Code: https://github.com/acaelles97/DeVIS