Visuo-Tactile Transformers for Manipulation
About
Learning representations in the joint domain of vision and touch can improve manipulation dexterity, robustness, and sample efficiency by exploiting mutual information and complementary cues. Here, we present Visuo-Tactile Transformers (VTTs), a novel multimodal representation learning approach suited for model-based reinforcement learning and planning. Our approach extends the Vision Transformer (Dosovitskiy et al., 2021) to handle visuo-tactile feedback. Specifically, VTT uses tactile feedback together with self- and cross-modal attention to build latent heatmap representations that focus attention on important task features in the visual domain. We demonstrate the efficacy of VTT for representation learning with a comparative evaluation against baselines on four simulated robot tasks and one real-world block-pushing task. We also conduct an ablation study over the components of VTT to highlight the importance of cross-modality in representation learning.
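The core idea, embedding visual patches and tactile readings into a shared token space and letting a single Transformer encoder attend across both modalities, can be sketched in a few lines. The snippet below is a minimal illustration under assumed settings (64x64 RGB images, 8x8 patches, a flat 6-dimensional tactile reading, and the `VisuoTactileFusion` module name are all hypothetical); it is not the authors' released implementation.

```python
# Minimal sketch of visuo-tactile token fusion with a shared Transformer
# encoder, so self- and cross-modal attention happen in one joint attention
# map. Shapes and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn


class VisuoTactileFusion(nn.Module):
    def __init__(self, patch_size=8, img_size=64, tactile_dim=6,
                 embed_dim=128, depth=4, num_heads=4):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Patch embedding for the visual stream (as in ViT).
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size,
                                     stride=patch_size)
        # Single-token embedding for the tactile reading.
        self.tactile_embed = nn.Linear(tactile_dim, embed_dim)
        # Learned positional embeddings for [tactile token + visual patches].
        self.pos_embed = nn.Parameter(
            torch.zeros(1, num_patches + 1, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)

    def forward(self, image, tactile):
        # image: (B, 3, H, W), tactile: (B, tactile_dim)
        patches = self.patch_embed(image).flatten(2).transpose(1, 2)  # (B, N, D)
        touch = self.tactile_embed(tactile).unsqueeze(1)              # (B, 1, D)
        tokens = torch.cat([touch, patches], dim=1) + self.pos_embed
        # Joint attention over both modalities: each visual patch can attend
        # to the tactile token and vice versa (cross-modal), as well as to
        # tokens from its own modality (self-attention).
        fused = self.encoder(tokens)
        # Attended visual tokens keep their spatial layout, so they can be
        # read out as a heatmap-like latent; the tactile token acts as a
        # global summary.
        return fused[:, 0], fused[:, 1:]


if __name__ == "__main__":
    model = VisuoTactileFusion()
    img = torch.randn(2, 3, 64, 64)
    touch = torch.randn(2, 6)
    tactile_latent, visual_latent = model(img, touch)
    print(tactile_latent.shape, visual_latent.shape)  # (2, 128) (2, 64, 128)
```

Because both modalities share one attention map, tactile cues can re-weight which visual patches dominate the latent representation, which is the intuition behind the attention heatmaps described above.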
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Grasp Success Prediction | Grasp dataset | Accuracy | 62.9 | 22 |
| Tactile Recognition | Tactile Cross-Domain, OF-Real to X (unseen target domains) | Average Accuracy | 49.7 | 22 |
| Insertion | Simulation | Insertion Success Rate | 78.6 | 14 |
| Tactile Recognition | TAG→OF-Real (test) | Accuracy | 55 | 12 |
| Material Property Recognition | TAG (Touch-and-Go) | Category Accuracy (top-1) | 77 | 10 |
| Object Identification | Object Folder Real | Top-1 Accuracy | 83.6 | 10 |
| Insertion | Simulation (noisy) | Success Rate | 0.634 | 7 |
| Mobile Catch | Simulation | Success Rate | 53.3 | 7 |
| Lift | Simulation (cylinder shape) | Success Rate | 69.4 | 7 |
| Lift | Simulation | Success Rate | 70.4 | 7 |