Where2Act: From Pixels to Actions for Articulated 3D Objects
About
One of the fundamental goals of visual perception is to allow agents to meaningfully interact with their environment. In this paper, we take a step towards that long-term goal: we extract highly localized actionable information for elementary actions such as pushing or pulling on articulated objects with movable parts. For example, given a drawer, our network predicts that applying a pulling force on the handle opens the drawer. We propose, discuss, and evaluate novel network architectures that, given image and depth data, predict the set of actions possible at each pixel and the regions over articulated parts that are likely to move under the force. We also propose a learning-from-interaction framework with an online data sampling strategy that allows us to train the network in simulation (SAPIEN) and to generalize across categories. Code and data are available on the project website: https://cs.stanford.edu/~kaichun/where2act/
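The core output described above is a dense, per-pixel prediction of which primitive actions (e.g. push, pull) are likely to succeed, plus which regions belong to movable parts. The snippet below is a minimal illustrative sketch of such a dense affordance head on top of an arbitrary RGB-D feature backbone; it is not the released Where2Act architecture, and the names `PerPixelAffordanceHead`, `feat_dim`, and the two-action setup are assumptions made for the example.

```python
import torch
import torch.nn as nn


class PerPixelAffordanceHead(nn.Module):
    """Illustrative head (hypothetical, not the paper's code): maps dense
    per-pixel features from an RGB-D backbone to (i) an actionability score
    per pixel for each primitive action and (ii) a movable-part mask."""

    def __init__(self, feat_dim: int = 128, num_actions: int = 2):
        super().__init__()
        # 1x1 convolutions keep the prediction fully per-pixel.
        self.actionability = nn.Conv2d(feat_dim, num_actions, kernel_size=1)
        self.movable_mask = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, feat_dim, H, W) dense features from any RGB-D backbone
        action_scores = torch.sigmoid(self.actionability(feats))  # (B, A, H, W)
        movable = torch.sigmoid(self.movable_mask(feats))          # (B, 1, H, W)
        return action_scores, movable


if __name__ == "__main__":
    # Random features stand in for a backbone's output in this toy example.
    head = PerPixelAffordanceHead(feat_dim=128, num_actions=2)
    dummy_feats = torch.randn(1, 128, 120, 160)
    scores, mask = head(dummy_feats)
    print(scores.shape, mask.shape)  # (1, 2, 120, 160) and (1, 1, 120, 160)
```

In a learning-from-interaction setup, the per-pixel scores would be supervised by the outcomes of interaction trials sampled online in simulation, with successful trials providing positive labels at the contacted pixels.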
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Pushing | SAPIEN (test) | Sample Manipulation Accuracy | 34.76 | 8 |
| Pulling | SAPIEN (test) | Sample Manipulation Accuracy | 27.55 | 8 |
| Pulling Affordance Prediction | SAPIEN PartNet-Mobility & ShapeNet (test) | F-Score | 66.42 | 7 |
| Pulling Affordance Prediction | SAPIEN (PartNet-Mobility & ShapeNet) Novel (test) | F-Score | 60.37 | 7 |
| Pushing Affordance Prediction | SAPIEN PartNet-Mobility & ShapeNet (test) | F-Score | 68.66 | 7 |
| Pushing Affordance Prediction | SAPIEN (PartNet-Mobility & ShapeNet) Novel (test) | F-Score | 64.86 | 7 |
| Articulated Object Manipulation | PartNet-Mobility v1 (train) | Box | 6.8 | 6 |
| Articulated Manipulation | ShapeNet PartNet-Mobility unseen objects | Bottle Success Rate | 2 | 6 |
| Edge-Pushing | ShapeNet PartNet-Mobility unseen objects | Success Rate (Bowl) | 0.00e+0 | 6 |
| Articulated Manipulation | ShapeNet PartNet-Mobility seen objects | Bottle Success Rate | 1 | 6 |