
Robot Learning with Sensorimotor Pre-training

About

We present a self-supervised sensorimotor pre-training approach for robotics. Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens. Given a sequence of camera images, proprioceptive robot states, and actions, we encode the sequence into tokens, mask out a subset, and train a model to predict the missing content from the rest. We hypothesize that if a robot can predict the masked-out content it will have acquired a good model of the physical world that can enable it to act. RPT is designed to operate on latent visual representations which makes prediction tractable, enables scaling to larger models, and allows fast inference on a real robot. To evaluate our approach, we collected a dataset of 20,000 real-world trajectories over 9 months using a combination of motion planning and grasping algorithms. We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
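The masked-prediction objective described above can be illustrated with a minimal sketch. All names and dimensions here are hypothetical (not from the paper), and the Transformer is replaced by an identity placeholder; the point is only the token interleaving, random masking, and loss-on-masked-positions structure of the pre-training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a trajectory of T steps, each contributing one
# latent image token, one proprioceptive-state token, and one action token,
# all projected to a shared embedding size D.
T, D = 8, 16
mask_ratio = 0.5  # fraction of tokens to mask out (illustrative value)

# Stand-ins for latent visual features, robot states, and actions.
image_tokens = rng.normal(size=(T, D))
state_tokens = rng.normal(size=(T, D))
action_tokens = rng.normal(size=(T, D))

# Interleave into one sensorimotor sequence: (img_t, state_t, act_t), ...
sequence = np.stack([image_tokens, state_tokens, action_tokens], axis=1)
sequence = sequence.reshape(3 * T, D)

# Sample a random subset of token positions to mask.
n_mask = int(mask_ratio * len(sequence))
masked_idx = rng.choice(len(sequence), size=n_mask, replace=False)
mask = np.zeros(len(sequence), dtype=bool)
mask[masked_idx] = True

# Replace masked tokens with a placeholder embedding (zeros here), then let
# a stand-in "model" (identity here; a Transformer in the actual approach)
# predict the missing content from the visible context.
inputs = sequence.copy()
inputs[mask] = 0.0
predictions = inputs  # placeholder for transformer(inputs)

# Reconstruction loss computed only on the masked positions.
loss = np.mean((predictions[mask] - sequence[mask]) ** 2)
print(f"masked {n_mask}/{len(sequence)} tokens, loss={loss:.3f}")
```

Because the model operates on latent visual representations rather than raw pixels, the targets in the real system are feature vectors like those above, which keeps prediction tractable and inference fast.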

Ilija Radosavovic, Baifeng Shi, Letian Fu, Ken Goldberg, Trevor Darrell, Jitendra Malik • 2023

Related benchmarks

Task                  Dataset         Metric            Result  Rank
Robotic Manipulation  Franka-Kitchen  Avg Success Rate  88.5    24
Visuomotor Control    Block Pushing   Avg Successes     52      13
Visuomotor Control    LIBERO Goal     Success Rate      17      13
Visuomotor Control    Push T          Success Rate      56      12
