Motion-inductive Self-supervised Object Discovery in Videos
About
In this paper, we consider the task of unsupervised object discovery in videos. Previous works have shown promising results by processing optical flow to segment objects. However, taking flow as input brings two drawbacks. First, flow cannot capture sufficient cues when objects remain static or are partially occluded. Second, it is challenging to establish temporal coherency from flow-only input, due to the missing texture information. To tackle these limitations, we propose a model that directly processes consecutive RGB frames and infers the optical flow between any pair of frames using a layered representation, with the opacity channels treated as the segmentation. Additionally, to enforce object permanence, we apply a temporal consistency loss on the masks inferred from randomly paired frames, which reflect motion at different paces, and encourage the model to segment objects even if they do not move at the current time point. Experimentally, we demonstrate superior performance over previous state-of-the-art methods on three public video segmentation datasets (DAVIS2016, SegTrackv2, and FBMS-59), while being computationally efficient by avoiding the overhead of computing optical flow as input.
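The temporal consistency idea above can be sketched as follows: masks inferred for the same reference frame from two differently paced frame pairs should agree, so their disagreement is penalized. The paper does not pin down the exact form of the loss here; this is a minimal NumPy sketch assuming a simple L1 penalty between mask probability maps, with all function and variable names chosen for illustration.

```python
import numpy as np

def temporal_consistency_loss(mask_a, mask_b):
    """Penalize disagreement between two opacity masks (values in [0, 1])
    predicted for the same reference frame from two frame pairs sampled
    at different temporal gaps. Illustrative L1 form, not the paper's
    exact loss."""
    return float(np.mean(np.abs(mask_a - mask_b)))

# Usage: identical predictions incur zero loss; divergent ones are penalized.
mask_slow = np.ones((4, 4))          # mask inferred from a short-gap pair
mask_fast = np.ones((4, 4))          # mask inferred from a long-gap pair
print(temporal_consistency_loss(mask_slow, mask_fast))  # → 0.0
```

Because the randomly paired frames probe motion at different paces, an object that happens to be static in one pair may still move in another, so agreement across pairs pushes the model toward segmenting the object in both.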
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Unsupervised Video Object Segmentation | DAVIS 2016 (val) | -- | 108 |
| Unsupervised Video Object Segmentation | SegTrack v2 | Jaccard Score: 69.4 | 56 |
| Video Object Segmentation | DAVIS 2016 | J-Measure: 79.2 | 44 |
| Unsupervised Video Object Segmentation | FBMS-59 | Jaccard Score: 66.9 | 43 |
| Video Object Segmentation | SegTrack v2 | -- | 34 |
| Single Object Video Segmentation | SegTrack v2 (val) | J Mean: 69.4 | 27 |
| Unsupervised Video Object Segmentation | FBMS-59 (test) | J Score: 66.9 | 17 |
| Video Object Segmentation | FBMS-59 | J (Mean): 66.9 | 11 |