Shadow: Leveraging Segmentation Masks for Cross-Embodiment Policy Transfer
About
Data collection in robotics is spread across diverse hardware, and this variation will increase as new hardware is developed. Effective use of this growing body of data requires methods capable of learning from diverse robot embodiments. We consider the setting of training a policy using expert trajectories from a single robot arm (the source) and evaluating on a different robot arm for which no data was collected (the target). We present a data editing scheme termed Shadow, in which the robot during training and evaluation is replaced with a composite segmentation mask of the source and target robots. In this way, the input data distributions at train and test time match closely, enabling robust policy transfer to the new, unseen robot while being far more data efficient than approaches that require co-training on large amounts of data from diverse embodiments. We demonstrate that an approach as simple as Shadow is effective both in simulation on varying tasks and robots, and on real robot hardware, where Shadow achieves an average of over 2x improvement in success rate compared to the strongest baseline.
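The core data edit described above can be sketched in a few lines: given per-pixel segmentation masks of the source robot (from the training frame) and the target robot (e.g., rendered at a matching pose), paint the union of the two masks over the image so that train- and test-time observations look alike. The function and mask sources below are illustrative assumptions, not Shadow's actual pipeline.

```python
import numpy as np

def shadow_composite(frame, source_mask, target_mask, fill=(0, 0, 0)):
    """Replace robot pixels with a composite 'shadow' mask.

    frame: (H, W, 3) uint8 RGB image.
    source_mask / target_mask: (H, W) boolean masks of the source robot
    (segmented from the frame) and the target robot (e.g., rendered at the
    matching end-effector pose). How these masks are obtained is an
    assumption for illustration.
    """
    composite = source_mask | target_mask  # union of both embodiments
    out = frame.copy()
    out[composite] = fill                  # paint the shadow over the robot
    return out

# Toy example: a white 4x4 image with two overlapping 'robot' masks.
frame = np.full((4, 4, 3), 255, dtype=np.uint8)
src = np.zeros((4, 4), dtype=bool); src[1:3, 1:3] = True
tgt = np.zeros((4, 4), dtype=bool); tgt[2:4, 2:4] = True
edited = shadow_composite(frame, src, tgt)
```

Because the same composite mask is applied at training and evaluation time, the policy never conditions on the appearance of either specific arm.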
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Place Cans in Plasticbox | Real-World xArm7 to Franka Panda | Success Rate | 5 | 4 |
| Lift Pot | RoboTwin Simulation UR5 to Franka Panda 14 | Success Rate | 2 | 4 |
| LR | Real-World xArm7 to Franka Panda | Success Rate | 0.1 | 4 |
| Place Cans in Plasticbox | RoboTwin Simulation UR5 to Franka Panda 14 | Success Rate | 2.3 | 4 |
| Stack bowls | RoboTwin Simulation UR5 to Franka Panda 14 | Success Rate | 6 | 4 |
| Stack bowls | Real-World xArm7 to Franka Panda | Success Rate | 1 | 4 |