Open-World Object Manipulation using Pre-trained Vision-Language Models

About

For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary, e.g. "can you get me the pink stuffed whale?" to their sensory observations and actions. This brings up a notably difficult challenge for robots: while robot learning approaches allow robots to learn many different behaviors from first-hand experience, it is impractical for robots to have first-hand experiences that span all of this semantic information. We would like a robot's policy to be able to perceive and pick up the pink stuffed whale, even if it has never seen any data interacting with a stuffed whale before. Fortunately, static data on the internet has vast semantic information, and this information is captured in pre-trained vision-language models. In this paper, we study whether we can interface robot policies with these pre-trained models, with the aim of allowing robots to complete instructions involving object categories that the robot has never seen first-hand. We develop a simple approach, which we call Manipulation of Open-World Objects (MOO), which leverages a pre-trained vision-language model to extract object-identifying information from the language command and image, and conditions the robot policy on the current image, the instruction, and the extracted object information. In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments. In addition, we show how MOO generalizes to other, non-language-based input modalities to specify the object of interest such as finger pointing, and how it can be further extended to enable open-world navigation and manipulation. The project's website and evaluation videos can be found at https://robot-moo.github.io/
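Below is a minimal sketch of the pipeline the abstract describes. The paper uses OWL-ViT as its pre-trained open-vocabulary detector; the sketch loads the Hugging Face transformers port of that model to localize the commanded object, then passes the image, instruction, and extracted object location to the policy. The `policy` argument and the `step` helper are hypothetical placeholders (the paper's robot policy interface is not public), so treat this as illustrative rather than a reference implementation.

```python
# Hedged sketch of a MOO-style pipeline: an open-vocabulary VLM (OWL-ViT)
# extracts object-identifying information from the instruction and image,
# and a separately learned policy is conditioned on that information.
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
detector = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")


def locate_object(image: Image.Image, object_phrase: str):
    """Localize the commanded object; returns the center (x, y) of the
    highest-scoring detection, or None if nothing clears the threshold."""
    inputs = processor(text=[[object_phrase]], images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = detector(**inputs)
    target_sizes = torch.tensor([image.size[::-1]])  # PIL gives (w, h); we need (h, w)
    results = processor.post_process_object_detection(
        outputs, threshold=0.1, target_sizes=target_sizes
    )[0]
    if len(results["scores"]) == 0:
        return None
    best = results["scores"].argmax()
    x0, y0, x1, y1 = results["boxes"][best].tolist()
    return ((x0 + x1) / 2, (y0 + y1) / 2)


def step(policy, image, instruction, object_phrase):
    """Hypothetical control step: condition the policy on the current image,
    the instruction, and the VLM-extracted object location."""
    object_xy = locate_object(image, object_phrase)
    return policy(image=image, instruction=instruction, object_xy=object_xy)
```

Because the object phrase is resolved by the detector rather than by the policy, the same conditioning interface also accepts locations from other modalities (e.g., a point derived from finger pointing), which is the generalization the abstract mentions.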

Austin Stone, Ted Xiao, Yao Lu, Keerthana Gopalakrishnan, Kuang-Huei Lee, Quan Vuong, Paul Wohlhart, Sean Kirmani, Brianna Zitkovich, Fei Xia, Chelsea Finn, Karol Hausman • 2023

Related benchmarks

Task                  Dataset                                  Metric        Result  Rank
--------------------  ---------------------------------------  ------------  ------  ----
Robotic Manipulation  Real-world Robotic Manipulation (test)   -             -       7
Button Pressing       Button Pressing Similar Textures         Success Rate  85      6
Button Pressing       Button Pressing Similar Shapes           Success Rate  81      6
Drink Fetching        Drink Fetching Similar Textures          Success Rate  66      6
Drink Fetching        Drink Fetching Similar Shapes            Success Rate  70      6
Rubbish Disposal      Rubbish Disposal Similar Textures        Success Rate  72      6
Rubbish Disposal      Rubbish Disposal Similar Shapes          Success Rate  75      6
Water Pouring         Water Pouring Similar Textures           Success Rate  57      6
Water Pouring         Water Pouring Similar Shapes             Success Rate  66      6
