
RVT: Robotic View Transformer for 3D Object Manipulation

About

For 3D object manipulation, methods that build an explicit 3D representation perform better than those relying only on camera images. But using explicit 3D representations like voxels comes at a large computing cost, adversely affecting scalability. In this work, we propose RVT, a multi-view transformer for 3D manipulation that is both scalable and accurate. Key features of RVT are an attention mechanism that aggregates information across views and re-rendering of the camera input from virtual views around the robot workspace. In simulation, we find that a single RVT model works well across 18 RLBench tasks with 249 task variations, achieving 26% higher relative success than the existing state-of-the-art method (PerAct). It also trains 36X faster than PerAct to reach the same performance and achieves 2.3X the inference speed of PerAct. Further, RVT can perform a variety of manipulation tasks in the real world with just a few ($\sim$10) demonstrations per task. Visual results, code, and the trained model are provided at https://robotic-view-transformer.github.io/.
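The abstract describes the cross-view aggregation step only at a high level. As an illustration, a single-head attention pool over per-view features might look like the sketch below; all names, dimensions, and weight shapes are illustrative assumptions, not RVT's actual implementation (which operates on full token grids per rendered virtual view).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_views(view_feats, Wq, Wk, Wv):
    """Single-head attention across virtual views (illustrative).

    view_feats: (V, D) array, one feature vector per rendered virtual view.
    Returns an (V, D) array where each view's feature has been updated
    with information attended from all views.
    """
    q = view_feats @ Wq                            # queries, (V, D)
    k = view_feats @ Wk                            # keys,    (V, D)
    v = view_feats @ Wv                            # values,  (V, D)
    d = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d), axis=-1)  # (V, V), rows sum to 1
    return attn @ v

rng = np.random.default_rng(0)
V, D = 5, 16  # e.g. 5 virtual views, 16-dim features (hypothetical sizes)
feats = rng.standard_normal((V, D))
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
out = aggregate_views(feats, Wq, Wk, Wv)
print(out.shape)
```

In the real model this attention block sits inside a transformer applied to tokens from all re-rendered views jointly, so the same mechanism mixes information across views at every layer.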

Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, Dieter Fox • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Robotic Manipulation | RLBench | Avg. Success Score | 62.9 | 56
Robotic Manipulation | RLBench (test) | Average Success Rate | 62.9 | 34
Multi-task Robotic Manipulation | RLBench | Avg. Success Rate | 65 | 16
Embodied Manipulation | RLBench Few-shot Adaptation Tasks (test) | Meat on Grill Success Rate | 80 | 12
Robotic Manipulation | RLBench 18Task | Average Success Rate | 62.9 | 9
Robotic Manipulation | COLOSSEUM | Avg. SR | 3.54e+3 | 7
Robotic Manipulation | RLBench multi-task | Average Success Rate | 62.9 | 7
3D keyframe-based behavior cloning | RLBench | Average Rank | 3.6 | 5
Robot Manipulation | RLBench 100 | Close Jar | 522.5 | 4
Close Drawer | Real-world Evaluation 1.0 (unseen object placements) | Success Rate | 50 | 4

(Showing 10 of 18 rows)
