# Visuo-Tactile Transformers for Manipulation

## About
Learning representations in the joint domain of vision and touch can improve manipulation dexterity, robustness, and sample efficiency by exploiting mutual information and complementary cues. Here, we present Visuo-Tactile Transformers (VTTs), a novel multimodal representation learning approach suited for model-based reinforcement learning and planning. Our approach extends the Vision Transformer \cite{dosovitskiy2021image} to handle visuo-tactile feedback. Specifically, VTT uses tactile feedback together with self- and cross-modal attention to build latent heatmap representations that focus attention on important task features in the visual domain. We demonstrate the efficacy of VTT for representation learning with a comparative evaluation against baselines on four simulated robot tasks and one real-world block-pushing task. We also conduct an ablation study over the components of VTT to highlight the importance of cross-modality in representation learning.
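To illustrate the kind of cross-modal attention VTT builds on, the sketch below concatenates visual patch tokens and tactile tokens into a single sequence so that one multi-head attention layer mixes information both within each modality (self-attention) and across modalities (cross-modal attention). This is a minimal PyTorch sketch under assumed names and dimensions (`VisuoTactileBlock`, the token counts, and the embedding size are all illustrative), not the paper's implementation.

```python
# Minimal sketch of joint self-/cross-modal attention over visual and
# tactile tokens. Hypothetical names and sizes; not the authors' code.
import torch
import torch.nn as nn

class VisuoTactileBlock(nn.Module):
    def __init__(self, dim=128, num_heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, vis_tokens, tac_tokens):
        # Concatenating both modalities into one sequence lets each head
        # attend within a modality (self-attention) and across modalities
        # (cross-modal attention) in a single pass.
        x = torch.cat([vis_tokens, tac_tokens], dim=1)  # (B, Nv+Nt, D)
        h = self.norm1(x)
        attn_out, attn_weights = self.attn(h, h, h)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        # The attention mass that tactile queries place on visual keys can
        # be reshaped into a spatial heatmap over the image patches.
        n_vis = vis_tokens.shape[1]
        tactile_to_visual = attn_weights[:, n_vis:, :n_vis]  # (B, Nt, Nv)
        return x[:, :n_vis], x[:, n_vis:], tactile_to_visual

# Example: 64 visual patch tokens and 4 tactile tokens, embedding dim 128.
block = VisuoTactileBlock(dim=128, num_heads=4)
vis = torch.randn(2, 64, 128)
tac = torch.randn(2, 4, 128)
vis_out, tac_out, heatmap = block(vis, tac)
print(vis_out.shape, tac_out.shape, heatmap.shape)
# torch.Size([2, 64, 128]) torch.Size([2, 4, 128]) torch.Size([2, 4, 64])
```

Reshaping each 64-entry attention row into an 8×8 patch grid yields a per-tactile-token heatmap over the image, which mirrors the latent-heatmap intuition described above.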
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Insertion | Simulation | Insertion Success Rate | 78.6 | 14 |
| Insertion | Simulation (Noisy) | Success Rate | 0.634 | 7 |
| Mobile Catch | Simulation | Success Rate | 53.3 | 7 |
| Lift | Simulation (Cylinder Shape) | Success Rate | 69.4 | 7 |
| Lift | Simulation | Success Rate | 70.4 | 7 |
| Lift | Simulation (Capsule Shape) | Success Rate | 54.7 | 7 |
| Block Rotate | Simulation | Success Rate | 1.3 | 7 |
| Door | Simulation | Success Rate | 0.998 | 7 |
| Dual Arm Lift | Simulation | Success Rate | 77.1 | 7 |
| Pen Rotate | Simulation | Success Rate | 0.7 | 7 |