
Zero-Shot Category-Level Object Pose Estimation

About

Object pose estimation is an important component of most vision pipelines for embodied agents, as well as in 3D vision more generally. In this paper we tackle the problem of estimating the pose of novel object categories in a zero-shot manner. This extends much of the existing literature by removing the need for pose-labelled datasets or category-specific CAD models for training or inference. Specifically, we make the following contributions. First, we formalise the zero-shot, category-level pose estimation problem and frame it in a way that is most applicable to real-world embodied agents. Secondly, we propose a novel method based on semantic correspondences from a self-supervised vision transformer to solve the pose estimation problem. We further re-purpose the recent CO3D dataset to present a controlled and realistic test setting. Finally, we demonstrate that all baselines for our proposed task perform poorly, and show that our method provides a six-fold improvement in average rotation accuracy at 30 degrees. Our code is available at https://github.com/applied-ai-lab/zero-shot-pose.
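The correspondence-then-alignment idea described above can be sketched in a few lines. Note this is a toy illustration, not the authors' implementation: the descriptors here are random stand-ins (in the paper they come from a self-supervised vision transformer), matching is plain cosine-similarity nearest neighbour, and the relative pose is recovered from matched 3D points with the standard Kabsch algorithm.

```python
import numpy as np

def match_features(feat_a, feat_b):
    # Cosine-similarity nearest neighbours from set A to set B.
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    return (a @ b.T).argmax(axis=1)

def kabsch(P, Q):
    # Rigid transform (R, t) minimising ||R @ P_i + t - Q_i|| over Nx3 point sets.
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy data: random 3D points with random "semantic descriptors" attached.
rng = np.random.default_rng(0)
pts_ref = rng.normal(size=(50, 3))
feats = rng.normal(size=(50, 384))           # stand-in for ViT descriptors
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.3])
pts_tgt = pts_ref @ R_true.T + t_true        # target view = rotated + shifted reference

idx = match_features(feats, feats)           # identical descriptors, so identity matching
R_est, t_est = kabsch(pts_ref, pts_tgt[idx])
print(np.allclose(R_est, R_true, atol=1e-6))  # → True
```

With real images, `pts_ref`/`pts_tgt` would be depth-backprojected keypoint locations and the two feature sets would differ, so the quality of the estimated pose rests entirely on how well the self-supervised features match semantically corresponding parts across views.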

Walter Goodwin, Sagar Vaze, Ioannis Havoutis, Ingmar Posner • 2022

Related benchmarks

Task                         Dataset       Metric                   Result   Rank
3D Object Pose Estimation    PASCAL3D+     Aeroplane Accuracy       61.7     5
3D Pose Estimation           PASCAL3D+     Bicycle 30° Acc          61.7     4
Unsupervised Alignment       Co3D (test)   Backpack Count/Score     44       4
3D Pose Evaluation           ObjectNet3D   Accuracy (30°)           42.2     3
