
Motion-inductive Self-supervised Object Discovery in Videos

About

In this paper, we consider the task of unsupervised object discovery in videos. Previous works have shown promising results by processing optical flow to segment objects. However, taking flow as input has two drawbacks. First, flow cannot capture sufficient cues when objects remain static or are partially occluded. Second, it is challenging to establish temporal coherency from flow-only input, due to the missing texture information. To tackle these limitations, we propose a model that directly processes consecutive RGB frames and infers the optical flow between any pair of frames using a layered representation, with the opacity channels treated as the segmentation. Additionally, to enforce object permanence, we apply a temporal consistency loss on the masks inferred from randomly-paired frames, which reflect motion at different paces, encouraging the model to segment objects even if they do not move at the current time point. Experimentally, we demonstrate superior performance over previous state-of-the-art methods on three public video segmentation datasets (DAVIS2016, SegTrackv2, and FBMS-59), while being computationally efficient by avoiding the overhead of computing optical flow as input.
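The two core ideas in the abstract can be illustrated with a minimal sketch: compositing RGB layers whose opacity (alpha) channels double as soft segmentation masks, and penalising disagreement between masks inferred from frame pairs at different temporal strides. This is an illustrative NumPy sketch under assumed shapes and a simple L1 penalty, not the authors' exact model or loss.

```python
import numpy as np

def composite_layers(rgb_layers, alphas):
    # Back-to-front alpha compositing: each layer carries an RGB
    # appearance (H, W, 3) and an opacity channel (H, W); in a
    # layered representation the alphas double as soft segmentation
    # masks for the discovered objects.
    h, w, _ = rgb_layers[0].shape
    out = np.zeros((h, w, 3))
    for rgb, alpha in zip(rgb_layers, alphas):
        a = alpha[..., None]              # broadcast to (H, W, 1)
        out = a * rgb + (1.0 - a) * out   # "over" operator
    return out

def temporal_consistency_loss(mask_slow, mask_fast):
    # Illustrative consistency penalty: masks inferred from frame
    # pairs sampled at different paces should agree, so an object
    # that is momentarily static must still be segmented.
    return float(np.mean(np.abs(mask_slow - mask_fast)))
```

For example, compositing a white foreground square over a black background reproduces the square exactly where its alpha is 1, and the consistency loss is zero only when the two masks agree everywhere.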

Shuangrui Ding, Weidi Xie, Yabo Chen, Rui Qian, Xiaopeng Zhang, Hongkai Xiong, Qi Tian • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Unsupervised Video Object Segmentation | DAVIS 2016 (val) | – | – | 108 |
| Unsupervised Video Object Segmentation | SegTrack v2 | Jaccard Score | 69.4 | 56 |
| Video Object Segmentation | DAVIS 2016 | J-Measure | 79.2 | 44 |
| Unsupervised Video Object Segmentation | FBMS-59 | Jaccard Score | 66.9 | 43 |
| Video Object Segmentation | SegTrack v2 | – | – | 34 |
| Single Object Video Segmentation | SegTrack v2 (val) | J Mean | 69.4 | 27 |
| Unsupervised Video Object Segmentation | FBMS-59 (test) | J Score | 66.9 | 17 |
| Video Object Segmentation | FBMS-59 | J (Mean) | 66.9 | 11 |
