Video Object Segmentation with Joint Re-identification and Attention-Aware Mask Propagation
About
The problem of video object segmentation can become extremely challenging when multiple instances co-exist. While each instance may exhibit large scale and pose variations, the problem is compounded when instances occlude each other, causing tracking failures. In this study, we formulate a deep recurrent network that segments and tracks objects in video simultaneously by exploiting their temporal continuity, while remaining able to re-identify them when they re-appear after a prolonged occlusion. We combine both temporal propagation and re-identification functionalities into a single framework that can be trained end-to-end. In particular, we present a re-identification module with template expansion to retrieve missing objects despite their large appearance changes. In addition, we contribute a new attention-based recurrent mask propagation approach that is robust to distractors not belonging to the target segment. Our approach achieves a new state-of-the-art global mean (Region Jaccard and Boundary F measure) of 68.2 on the challenging DAVIS 2017 benchmark (test-dev set), outperforming the winning solution, which achieves a global mean of 66.1 on the same partition.
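To make the attention-aware propagation idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of one propagation step: a target template is pooled from the previous frame's features under the previous mask, and the mask is carried to the current frame by attending to pixels whose features resemble the template, which suppresses distractor pixels.

```python
import numpy as np

def attention_mask_propagation(prev_mask, prev_feat, curr_feat, thresh=0.5):
    """Illustrative single-step mask propagation via feature-similarity attention.

    prev_mask: (H, W) binary mask of the target in the previous frame.
    prev_feat: (H, W, C) feature map of the previous frame.
    curr_feat: (H, W, C) feature map of the current frame.
    Returns a (H, W) binary mask estimate for the current frame.
    """
    H, W, C = curr_feat.shape
    # Template: mean feature of the target region in the previous frame
    # (analogous in spirit to a single template; the paper expands templates over time).
    template = (prev_feat * prev_mask[..., None]).sum(axis=(0, 1)) / (prev_mask.sum() + 1e-6)
    # Attention: cosine similarity of every current-frame pixel to the template.
    feat_flat = curr_feat.reshape(-1, C)
    denom = np.linalg.norm(feat_flat, axis=1) * np.linalg.norm(template) + 1e-6
    attn = (feat_flat @ template / denom).reshape(H, W)
    # Keep only pixels attending strongly to the target, discarding distractors.
    return (attn > thresh).astype(np.float32)
```

In the actual model, the attention and propagation are learned jointly inside a recurrent network rather than thresholded cosine similarity, but the data flow (mask + features in, attention-weighted mask out) is the same.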
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Object Segmentation | DAVIS 2017 (val) | J Mean | 67.3 | 1130 |
| Video Object Segmentation | DAVIS 2016 (val) | J Mean | 86.2 | 564 |
| Video Object Segmentation | DAVIS 2017 (test-dev) | Region J Mean | 65.8 | 237 |
| Video Object Segmentation | SegTrack v2 | -- | -- | 34 |
| Video Object Segmentation | DAVIS 2017 (test-dev) | J&F Mean | 68.2 | 8 |
| Video Object Segmentation | DAVIS 2018 (test-challenge) | J&F Mean | 73.8 | 7 |