
Visuo-Tactile Transformers for Manipulation

About

Learning representations in the joint domain of vision and touch can improve manipulation dexterity, robustness, and sample efficiency by exploiting mutual information and complementary cues. Here, we present Visuo-Tactile Transformers (VTTs), a novel multimodal representation learning approach suited to model-based reinforcement learning and planning. Our approach extends the Vision Transformer (Dosovitskiy et al., 2021) to handle visuo-tactile feedback. Specifically, VTT uses tactile feedback together with self- and cross-modal attention to build latent heatmap representations that focus attention on important task features in the visual domain. We demonstrate the efficacy of VTT for representation learning with a comparative evaluation against baselines on four simulated robot tasks and one real-world block-pushing task. We also conduct an ablation study over the components of VTT to highlight the importance of cross-modality in representation learning.

Yizhou Chen, Andrea Sipos, Mark Van der Merwe, Nima Fazeli · 2022
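
To make the architecture concrete, below is a minimal PyTorch sketch of a visuo-tactile transformer encoder in the spirit of VTT: image patches and a tactile reading are embedded into a single token sequence, and a standard Transformer encoder then applies self- and cross-modal attention jointly, since every token attends to every other token. All names, dimensions, the single-token tactile embedding, and the fusion scheme are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a visuo-tactile transformer encoder in the spirit of VTT.
# Module names, dimensions, and the fusion scheme are illustrative assumptions,
# not the paper's reference implementation.
import torch
import torch.nn as nn


class VisuoTactileEncoder(nn.Module):
    """Embeds image patches and a tactile reading into one token sequence,
    then applies a standard Transformer encoder so that self- and
    cross-modal attention happen jointly (every token attends to all)."""

    def __init__(self, patch=16, img_size=64, tact_dim=6, d_model=128,
                 n_heads=4, n_layers=4):
        super().__init__()
        self.patch = patch
        n_patches = (img_size // patch) ** 2
        # Linear patch embedding, as in ViT (Dosovitskiy et al., 2021).
        self.patch_embed = nn.Linear(3 * patch * patch, d_model)
        # Tactile feedback enters the sequence as one extra token (assumption).
        self.tact_embed = nn.Linear(tact_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, image, tactile):
        # image: (B, 3, H, W), tactile: (B, tact_dim)
        B, C, H, W = image.shape
        p = self.patch
        # Split the image into non-overlapping patches and flatten each one.
        patches = image.unfold(2, p, p).unfold(3, p, p)  # (B,3,H/p,W/p,p,p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        tokens = torch.cat(
            [self.patch_embed(patches), self.tact_embed(tactile)[:, None]],
            dim=1)
        # Joint attention over visual and tactile tokens; in the paper the
        # attention over visual tokens yields latent heatmaps. Here we simply
        # return the fused latent sequence.
        return self.encoder(tokens + self.pos)


enc = VisuoTactileEncoder()
z = enc(torch.randn(2, 3, 64, 64), torch.randn(2, 6))
print(z.shape)  # torch.Size([2, 17, 128]): 16 patch tokens + 1 tactile token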

Related benchmarks

Task | Dataset | Metric | Result | Rank
Grasp Success Prediction | Grasp dataset | Accuracy | 62.9 | 22
Tactile Recognition | Tactile Cross-Domain OF Real to X (unseen target domains) | Average ACC | 49.7 | 22
Insertion | Simulation | Insertion Success Rate | 78.6 | 14
Tactile Recognition | TAG→OF-Real (test) | Accuracy | 55 | 12
Material Property Recognition | TAG (Touch-and-Go) | Category Accuracy (top-1) | 77 | 10
Object Identification | Object Folder Real | Top-1 Accuracy | 83.6 | 10
Insertion | Simulation Noisy | Success Rate | 0.634 | 7
Mobile Catch | Simulation | Success Rate | 53.3 | 7
Lift | Simulation Cylinder Shape | Success Rate | 69.4 | 7
Lift | Simulation | Success Rate | 70.4 | 7
(Showing 10 of 21 benchmark rows.)
