Few-Shot Viewpoint Estimation
About
Viewpoint estimation for known categories of objects has improved significantly thanks to deep networks and large datasets, but generalization to unknown categories remains very challenging. Aiming to improve performance on unknown categories, we introduce the problem of category-level few-shot viewpoint estimation. We design a novel framework to successfully train viewpoint networks for new categories with few examples (10 or fewer). We formulate the problem as one of learning to estimate category-specific 3D canonical shapes, their associated depth estimates, and semantic 2D keypoints. We apply meta-learning to learn weights for our network that are amenable to category-specific few-shot fine-tuning. Furthermore, we design a flexible meta-Siamese network that maximizes information sharing during meta-learning. Through extensive experimentation on the ObjectNet3D and Pascal3D+ benchmark datasets, we demonstrate that our framework, which we call MetaView, significantly outperforms fine-tuning state-of-the-art models with few examples, and that the specific architectural innovations of our method are crucial to achieving good performance.
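The core meta-learning recipe above (learn meta-weights that adapt to a new category with a handful of gradient steps) can be sketched on a toy problem. This is not the MetaView network: it is a hypothetical 1-D linear-regression setup with a first-order (Reptile-style) outer update, meant only to illustrate the inner-loop fine-tuning / outer-loop meta-update structure; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    """Gradient of mean squared error for the toy linear model y_hat = w * x."""
    return np.mean(2.0 * (w * x - y) * x)

def inner_adapt(w, x, y, lr=0.5, steps=5):
    """Category-specific fine-tuning: a few gradient steps from the meta-weights,
    using only the small support set (the few-shot examples for one category)."""
    for _ in range(steps):
        w = w - lr * loss_grad(w, x, y)
    return w

# Meta-training loop: each iteration samples a "category" (a task y = a * x),
# adapts to its 10-shot support set, then nudges the meta-weights toward the
# adapted weights (first-order Reptile-style outer update).
w_meta = 0.0
for _ in range(200):
    a = rng.uniform(0.5, 2.0)           # task-specific ground truth slope
    x = rng.uniform(-1.0, 1.0, size=10)  # 10-shot support set
    y = a * x
    w_task = inner_adapt(w_meta, x, y)
    w_meta += 0.1 * (w_task - w_meta)    # move meta-weights toward adapted ones
```

After meta-training, `w_meta` sits near the center of the task distribution, so a few inner-loop steps on a previously unseen task suffice to adapt; the same idea, applied to a full viewpoint network and real object categories, is what makes few-shot fine-tuning effective.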
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Joint Object Detection and Viewpoint Estimation | ObjectNet3D (intra-dataset) | Bed Accuracy | 36 | 6 |
| Joint Object Detection and Viewpoint Estimation | Pascal3D+ (inter-dataset) | Aero Score | 0.12 | 6 |
| Viewpoint Estimation | ObjectNet3D, 20 novel classes (intra-dataset, 10-shot) | Acc (30 deg) | 48 | 5 |
| Viewpoint Estimation | Pascal3D+, 12 novel classes | Accuracy (30 deg) | 33 | 5 |