Do As I Can, Not As I Say: Grounding Language in Robotic Affordances

About

Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator. The project's website and the video can be found at https://say-can.github.io/.
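The decision rule described above can be made concrete with a short sketch. At each step, every candidate skill is scored by the product of the language model's likelihood of the skill's text given the instruction and the plan so far (task relevance, "say") and the skill's value function in the current state (feasibility, "can"); the highest-scoring skill is executed and appended to the prompt. The sketch below is illustrative only: `llm_skill_logprob`, `affordance_value`, and `execute` are hypothetical placeholders standing in for the paper's LLM scoring, value functions, and low-level policies, not the authors' released code, and it assumes the skill set includes a terminating "done" option, as in the paper.

```python
import math
from typing import Callable, List

def saycan_plan(
    instruction: str,
    skills: List[str],
    llm_skill_logprob: Callable[[str, str], float],  # hypothetical: log p_LLM(skill | prompt)
    affordance_value: Callable[[str], float],        # hypothetical: p(skill succeeds | current state)
    execute: Callable[[str], None],                  # hypothetical: runs the low-level skill on the robot
    max_steps: int = 10,
) -> List[str]:
    """Greedy SayCan-style loop: pick argmax of p_LLM(skill) * value(skill)."""
    prompt = f"Human: {instruction}\nRobot: "
    plan: List[str] = []
    for _ in range(max_steps):
        # Combine semantic relevance (language model) with real-world
        # feasibility (value function / affordance) for every skill.
        scores = {
            s: math.exp(llm_skill_logprob(prompt, s)) * affordance_value(s)
            for s in skills
        }
        best = max(scores, key=scores.get)
        if best == "done":  # terminating "skill" ends the episode
            break
        execute(best)
        plan.append(best)
        prompt += best + ", "  # condition the next LLM query on the partial plan
    return plan
```

In the paper, the affordance scores come from value functions of the pretrained low-level skills; in this sketch, any state-conditioned success predictor would fit the same interface.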

Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, Andy Zeng • 2022

Related benchmarks

Task | Dataset | Result | Rank
Continual Instruction Following | ALFRED | Success Rate (SR): 45.67 | 28
Continual Instruction Following | VirtualHome | SR: 35.12 | 15
Continual Instruction Following | CARLA | Success Rate (SR): 37.55 | 12
Robotic Task Planning | G-Dataset zero-shot | TSR: 42.5 | 9
Robotic Task Planning | R-Dataset zero-shot | TSR: 59 | 9
Interactive Science Simulation | ScienceWorld v1.0 (test) | Task 1-1 (L) Score: 33.06 | 8
Long-Horizon Robotics Manipulation | Paint-block (Seen) | Success Rate: 67.2 | 8
Long-Horizon Robotics Manipulation | Paint-block (Unseen) | Success Rate: 62.8 | 8
Long-Horizon Robotics Manipulation | Object-arrange (Seen) | Success Rate: 70.3 | 8
Long-Horizon Robotics Manipulation | Object-arrange (Unseen) | Success Rate: 66.9 | 8

Showing 10 of 13 rows.
