
Discovering Objects that Can Move

About

This paper studies the problem of object discovery -- separating objects from the background without manual labels. Existing approaches use appearance cues, such as color, texture, and location, to group pixels into object-like regions. However, by relying on appearance alone, these methods fail to separate objects from the background in cluttered scenes. This is a fundamental limitation, since the definition of an object is inherently ambiguous and context-dependent. To resolve this ambiguity, we choose to focus on dynamic objects -- entities that can move independently in the world. We then scale recent autoencoder-based frameworks for unsupervised object discovery from toy synthetic images to complex real-world scenes. To this end, we simplify their architecture and augment the resulting model with a weak learning signal from general motion segmentation algorithms. Our experiments demonstrate that, despite only capturing a small subset of the objects that move, this signal is enough to generalize to segment both moving and static instances of dynamic objects. We show that our model scales to a newly collected, photo-realistic synthetic dataset with street driving scenarios. Additionally, we leverage ground-truth segmentation and flow annotations in this dataset for thorough ablation and evaluation. Finally, our experiments on the real-world KITTI benchmark demonstrate that the proposed approach outperforms both heuristic- and learning-based methods by capitalizing on motion cues.
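The core idea above -- supervising slot masks only on pixels that an off-the-shelf motion segmenter flags as moving, while leaving static pixels unconstrained -- can be sketched as follows. This is a minimal illustrative version, not the paper's implementation: the function name, the greedy slot-to-segment matching (the paper may use a different assignment), and the plain cross-entropy form are all assumptions made for clarity.

```python
import numpy as np

def motion_supervision_loss(slot_masks, motion_seg):
    """Weak supervision from a generic motion segmentation result.

    slot_masks: (K, H, W) soft assignment of pixels to K object slots
                (normalized over K, so each pixel's slot weights sum to 1).
    motion_seg: (H, W) integer map; 0 = not detected as moving,
                i > 0 = i-th moving segment. Static objects are simply
                unlabeled and contribute no loss, which is what allows
                the model to generalize beyond the moving subset.

    Each moving segment is greedily matched to the slot with the largest
    overlap, and that slot is encouraged via cross-entropy to claim the
    segment's pixels.
    """
    eps = 1e-8
    loss, n_pixels = 0.0, 0
    for seg_id in np.unique(motion_seg):
        if seg_id == 0:
            continue  # background / undetected pixels carry no signal
        region = motion_seg == seg_id          # boolean (H, W) mask
        # greedy matching: slot with the highest total mass on the segment
        k = slot_masks[:, region].sum(axis=1).argmax()
        loss += -np.log(slot_masks[k, region] + eps).sum()
        n_pixels += region.sum()
    return loss / max(n_pixels, 1)
```

Because unlabeled (static) pixels incur zero loss, the reconstruction objective of the autoencoder remains free to group them into slots by appearance, while the motion term anchors what counts as an "object."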

Zhipeng Bao, Pavel Tokmakov, Allan Jabri, Yu-Xiong Wang, Adrien Gaidon, Martial Hebert · 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Unsupervised Multi-object Segmentation | KITTI | FG-ARI | 47.1 | 9
Object Discovery | TRI-PD (val) | Fg. ARI | 0.509 | 6
Object Discovery | KITTI (val) | Fg. ARI | 47.1 | 6
Object Discovery | CATER original (test) | Fg. ARI | 90.4 | 6
Object Discovery | TRI-PD (test) | Fg. ARI | 50.9 | 6
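The FG-ARI (foreground Adjusted Rand Index) metric used above scores how well predicted segments agree with ground-truth objects while ignoring background pixels, so background over- or under-segmentation is not penalized. A minimal sketch of the computation, assuming the standard ARI formula restricted to a foreground mask (the function name and argument layout are illustrative):

```python
import numpy as np

def fg_ari(true_seg, pred_seg, fg_mask):
    """Adjusted Rand Index over foreground pixels only (FG-ARI).

    true_seg, pred_seg: integer label maps of the same shape.
    fg_mask: boolean map; only True pixels are scored.
    Returns 1.0 for a perfect match (up to label permutation),
    ~0.0 for chance-level agreement.
    """
    t = true_seg[fg_mask].ravel()
    p = pred_seg[fg_mask].ravel()
    n = t.size
    # contingency table between ground-truth and predicted clusters
    _, ti = np.unique(t, return_inverse=True)
    _, pi = np.unique(p, return_inverse=True)
    C = np.zeros((ti.max() + 1, pi.max() + 1), dtype=np.int64)
    np.add.at(C, (ti, pi), 1)
    comb = lambda x: x * (x - 1) / 2.0   # "n choose 2", elementwise
    sum_ij = comb(C).sum()
    a = comb(C.sum(axis=1)).sum()        # pairs within true clusters
    b = comb(C.sum(axis=0)).sum()        # pairs within predicted clusters
    expected = a * b / comb(n)
    max_index = (a + b) / 2.0
    if max_index == expected:            # degenerate single-cluster case
        return 1.0
    return (sum_ij - expected) / (max_index - expected)
```

Note the scale inconsistency in the table above: TRI-PD (val) is reported as a fraction (0.509) while the other rows use percentages; 0.509 and 50.9 are the same score.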
