RoboEngine: Plug-and-Play Robot Data Augmentation with Semantic Robot Segmentation and Background Generation
About
Visual augmentation has become a crucial technique for enhancing the visual robustness of imitation learning. However, existing methods are often limited by prerequisites such as camera calibration or the need for controlled environments (e.g., green-screen setups). In this work, we introduce RoboEngine, the first plug-and-play visual robot data augmentation toolkit. For the first time, users can effortlessly generate physics- and task-aware robot scenes with just a few lines of code. To achieve this, we present a novel robot scene segmentation dataset, a generalizable high-quality robot segmentation model, and a fine-tuned background generation model, which together form the core components of the out-of-the-box toolkit. Using RoboEngine, we demonstrate generalization of robot manipulation tasks across six entirely new scenes, based solely on demonstrations collected in a single scene, achieving a more than 200% performance improvement over the no-augmentation baseline. All datasets, model weights, and the toolkit are released at https://roboengine.github.io/
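The toolkit's actual API is documented at the project page; the core augmentation step it describes (segment the robot, then composite it onto a newly generated background) can be sketched with plain NumPy. The function name and the dummy mask below are illustrative assumptions, not RoboEngine's real interface — in practice the mask would come from the robot segmentation model and the background from the fine-tuned generation model.

```python
import numpy as np

def composite_robot(frame, robot_mask, background):
    """Paste robot pixels from the original frame onto a new background.

    frame:      (H, W, 3) uint8 original camera image
    robot_mask: (H, W) bool, True where the robot/task objects are
    background: (H, W, 3) uint8 replacement background image

    Illustrative helper, not part of the RoboEngine API.
    """
    out = background.copy()
    out[robot_mask] = frame[robot_mask]  # keep robot, swap everything else
    return out

# Toy example: robot occupies the top-left 2x2 patch of a 4x4 frame.
frame = np.full((4, 4, 3), 200, dtype=np.uint8)       # original scene
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True                                    # "segmented" robot
background = np.zeros((4, 4, 3), dtype=np.uint8)       # generated background

augmented = composite_robot(frame, mask, background)
```

Repeating this per-frame with varied generated backgrounds is what yields the scene-level visual diversity the paper credits for the generalization gains.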
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Robot Manipulation | UR-5e Triple-Factor Variation Suite (test) | PutCornPot Success Rate | 0.16 | 5 |
| Robot Manipulation | AGX Triple-Factor Variation Suite (test) | AGX Success Rate (PutCornPlate) | 30 | 5 |
| Robot Manipulation | TK (Tien Kung) Triple-Factor Variation Suite 2.0 (test) | TK2 WeightApple Success Rate | 0.38 | 5 |
| Robotic Manipulation Generalization | LIBERO-Plus (test) | Background Success Rate | 80.6 | 3 |
| Multi-view video generation | Droid 300 cases (test) | FID | 62.77 | 3 |