Learning Referring Video Object Segmentation from Weak Annotation
About
Referring video object segmentation (RVOS) aims to segment the target object in all video frames based on a sentence describing the object. Although existing RVOS methods have achieved significant performance, they depend on densely annotated datasets, which are expensive and time-consuming to obtain. In this paper, we propose a new annotation scheme that reduces the annotation effort by 8 times while still providing sufficient supervision for RVOS. Our scheme requires only a mask for the frame where the object first appears and bounding boxes for the remaining frames. Based on this scheme, we develop a novel RVOS method that exploits weak annotations effectively. Specifically, we first build a simple but effective baseline model, SimRVOS, for RVOS with weak annotation. Then, we design a cross-frame segmentation module that uses language-guided dynamic filters generated from one frame to segment the target object in other frames, fully exploiting the single mask annotation and the bounding boxes. Finally, we develop a bi-level contrastive learning method to enhance the pixel-level discriminative representation of the model under weak annotation. Extensive experiments show that our method achieves comparable or even superior performance to fully supervised methods without requiring dense mask annotations.
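The cross-frame segmentation idea can be illustrated with a minimal sketch: generate a dynamic filter from the mask-annotated reference frame and the language embedding, then convolve it over another frame's features to predict that frame's mask. The module name, shapes, and the mask-pooling plus 1×1 dynamic convolution design below are our own assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossFrameSegHead(nn.Module):
    """Hypothetical cross-frame head: language-guided dynamic filters from one frame segment another."""

    def __init__(self, feat_dim=256, lang_dim=256, kernel_size=1):
        super().__init__()
        self.kernel_size = kernel_size
        # Fuse the visual object descriptor with the sentence embedding.
        self.fuse = nn.Linear(feat_dim + lang_dim, feat_dim)
        # Predict a per-sample conv kernel (weights + bias) from the fused feature.
        self.filter_gen = nn.Linear(feat_dim, feat_dim * kernel_size * kernel_size + 1)

    def forward(self, ref_feat, ref_mask, lang_emb, tgt_feat):
        """
        ref_feat: (B, C, H, W) features of the frame with the mask annotation
        ref_mask: (B, 1, H, W) binary mask of the target object in that frame
        lang_emb: (B, L) sentence-level language embedding
        tgt_feat: (B, C, H, W) features of another frame to segment
        """
        B, C, H, W = ref_feat.shape
        # Mask-pool the reference features into a single object descriptor.
        area = ref_mask.flatten(2).sum(-1).clamp(min=1e-6)                # (B, 1)
        obj_desc = (ref_feat * ref_mask).flatten(2).sum(-1) / area        # (B, C)
        fused = self.fuse(torch.cat([obj_desc, lang_emb], dim=-1))        # (B, C)
        params = self.filter_gen(fused)                                   # (B, C*k*k + 1)
        weight, bias = params[:, :-1], params[:, -1]

        # Apply each sample's dynamic filter to its own target-frame features.
        logits = []
        for b in range(B):
            w = weight[b].view(1, C, self.kernel_size, self.kernel_size)
            logits.append(F.conv2d(tgt_feat[b:b + 1], w, bias=bias[b:b + 1],
                                   padding=self.kernel_size // 2))
        return torch.cat(logits, dim=0)                                   # (B, 1, H, W) mask logits
```

In training, the predicted logits for box-only frames would be supervised with box-derived constraints rather than dense masks, which is where the weak-annotation scheme comes in.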
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Referring Video Object Segmentation | Ref-YouTube-VOS (val) | J&F | 46.6 | 200 |
| Referring Video Object Segmentation | Ref-DAVIS 2017 (val) | J&F | 47.3 | 178 |
| Referring Video Object Segmentation | JHMDB Sentences (test) | Overall IoU | 0.632 | 83 |
| Referring Video Object Segmentation | A2D-Sentences (val) | Overall IoU | 66.3 | 11 |