Depth-PC: A Visual Servo Framework Integrated with Cross-Modality Fusion for Sim2Real Transfer
About
Visual servoing techniques guide robotic motion using visual information to accomplish manipulation tasks, which demand high precision and robustness to noise. Traditional methods often require prior knowledge and are susceptible to external disturbances. Learning-driven alternatives, while promising, frequently struggle with scarce training data and limited generalization. To address these challenges, we propose Depth-PC, a novel visual servoing framework that decouples simulation-based training from real-world inference, achieving zero-shot Sim2Real transfer for servo tasks. To exploit the spatial and geometric information carried by depth and point-cloud features, we introduce cross-modality feature fusion, a first for servo tasks, and feed the fused features into a dedicated Graph Neural Network that establishes keypoint correspondences. In simulation and real-world experiments, our approach achieves a larger convergence basin and higher accuracy than state-of-the-art methods, fulfilling the requirements of robotic servo tasks while enabling zero-shot Sim2Real transfer. Beyond the gains of the overall framework, our experiments also confirm the effectiveness of cross-modality feature fusion for servo tasks. Code is available at https://github.com/3nnui/Depth-PC.
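To make the pipeline description concrete, below is a minimal PyTorch sketch of the two ideas named above: cross-attention fusion of depth features with point-cloud features, followed by a small message-passing network that scores keypoint correspondences between the current and goal observations. Class names, feature dimensions, and the mean-pooling message scheme are illustrative assumptions for this sketch, not the modules actually used in the Depth-PC repository.

```python
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuse per-keypoint depth features with point-cloud features via cross-attention.

    NOTE: a simplified stand-in for the paper's cross-modality fusion, not the repo code.
    """

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, depth_feat, pcd_feat):
        # depth_feat, pcd_feat: (B, N, dim) features for N keypoints
        fused, _ = self.attn(query=depth_feat, key=pcd_feat, value=pcd_feat)
        return self.norm(depth_feat + fused)


class KeypointGNN(nn.Module):
    """Message passing over a fully connected keypoint graph, then a soft
    correspondence matrix between current and goal keypoint descriptors."""

    def __init__(self, dim=128, layers=3):
        super().__init__()
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(layers)
        )

    def propagate(self, x):
        # x: (B, N, dim); aggregate a mean message from all nodes and update residually
        for mlp in self.mlps:
            msg = x.mean(dim=1, keepdim=True).expand_as(x)
            x = x + mlp(torch.cat([x, msg], dim=-1))
        return x

    def forward(self, cur, goal):
        cur, goal = self.propagate(cur), self.propagate(goal)
        # scaled dot-product similarity between current and goal keypoints: (B, N, N)
        scores = torch.einsum("bnd,bmd->bnm", cur, goal) / cur.shape[-1] ** 0.5
        return scores.softmax(dim=-1)  # soft keypoint correspondence assignment


if __name__ == "__main__":
    B, N, D = 2, 32, 128
    fusion, matcher = CrossModalFusion(D), KeypointGNN(D)
    cur = fusion(torch.randn(B, N, D), torch.randn(B, N, D))
    goal = fusion(torch.randn(B, N, D), torch.randn(B, N, D))
    print(matcher(cur, goal).shape)  # torch.Size([2, 32, 32])
```

In an actual servo loop, the correspondence matrix would be turned into a relative camera pose or velocity command; that step is omitted here for brevity.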
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Servoing | E_YCB zero-shot YCB-Video | SR | 100 | 17 |
| Visual Servoing | VOC 2012 (test) | SR (%) | 99.33 | 4 |
| Visual Servoing | Real-world environment Scene-Easy | Success Rate | 60 | 3 |
| Visual Servoing | Real-world environment Scene-Hard | SR | 0.58 | 3 |
| Visual Servoing | Real-world environment Scene-Depth | Success Rate (SR) | 60 | 3 |
| Visual Servoing | Real-world environment Base-Low | SR | 62.33 | 3 |
| Visual Servoing | Real-world environment Base-High | SR | 58.33 | 3 |