
Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation

About

Transformers have revolutionized vision and natural language processing with their ability to scale with large datasets. But in robotic manipulation, data is both limited and expensive. Can manipulation still benefit from Transformers with the right problem formulation? We investigate this question with PerAct, a language-conditioned behavior-cloning agent for multi-task 6-DoF manipulation. PerAct encodes language goals and RGB-D voxel observations with a Perceiver Transformer, and outputs discretized actions by "detecting the next best voxel action". Unlike frameworks that operate on 2D images, the voxelized 3D observation and action space provides a strong structural prior for efficiently learning 6-DoF actions. With this formulation, we train a single multi-task Transformer for 18 RLBench tasks (with 249 variations) and 7 real-world tasks (with 18 variations) from just a few demonstrations per task. Our results show that PerAct significantly outperforms unstructured image-to-action agents and 3D ConvNet baselines for a wide range of tabletop tasks.
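The key idea in the abstract — predicting a discretized action by "detecting the next best voxel" — can be sketched as a per-head argmax over the network's outputs. The sketch below is a hypothetical illustration, not the paper's actual API: it assumes the Perceiver backbone has already produced per-voxel translation logits plus separate rotation-bin and gripper logits, and the shapes, names, and bin counts are illustrative.

```python
import numpy as np

# Hypothetical decoding step for a PerAct-style discretized action.
# Shapes, names, and bin sizes are assumptions for illustration only.

GRID = 100        # voxel grid resolution (a GRID^3 observation/action space)
ROT_BINS = 72     # discrete rotation bins per Euler axis (5 degrees each)

def decode_action(trans_logits, rot_logits, grip_logits):
    """'Detect' the next best voxel action via argmax over each head."""
    # Translation: index of the highest-scoring voxel in the 3D grid.
    voxel_idx = np.unravel_index(np.argmax(trans_logits), trans_logits.shape)
    # Rotation: best discrete bin per Euler axis, mapped back to degrees.
    rot_bins = np.argmax(rot_logits, axis=1)
    euler_deg = rot_bins * (360.0 / ROT_BINS)
    # Gripper: binary open/close decision.
    gripper_open = bool(np.argmax(grip_logits))
    return voxel_idx, euler_deg, gripper_open

# Example with random logits standing in for the Transformer's outputs.
rng = np.random.default_rng(0)
trans_logits = rng.standard_normal((GRID, GRID, GRID))
rot_logits = rng.standard_normal((3, ROT_BINS))
grip_logits = rng.standard_normal(2)
voxel, euler, grip = decode_action(trans_logits, rot_logits, grip_logits)
```

Framing the action space this way turns continuous 6-DoF control into a classification problem over the same voxel grid used for observation, which is the structural prior the abstract credits for sample-efficient learning.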

Mohit Shridhar, Lucas Manuelli, Dieter Fox • 2022

Related benchmarks

Task | Dataset | Result | Rank
Robotic Manipulation | RLBench | Avg. Success Score: 49.4 | 56
Robotic Manipulation | RLBench (test) | Average Success Rate: 49.4 | 34
Multi-task Robotic Manipulation | RLBench | Avg. Success Rate: 52.3 | 16
Robotic Manipulation | RLBench standard (test) | Reach Target Success Rate: 100 | 12
Robot Manipulation | RLBench Moderate Shift | Average Success Rate: 9.3 | 11
close jar | RLBench | Success Rate: 60 | 10
meat off grill | RLBench | Success Rate: 84 | 10
open drawer | RLBench | Success Rate: 80 | 10
put in drawer | RLBench | Success Rate: 68 | 10
Robot Manipulation | RLBench Large Shift | Rel. Drop (Avg): -16.1 | 10

(Showing 10 of 40 rows)

Other info

Code
