
Never-Ending Behavior-Cloning Agent for Robotic Manipulation

About

Relying on multi-modal observations, embodied robots (e.g., humanoid robots) can perform multiple manipulation tasks in unstructured real-world environments. However, most language-conditioned behavior-cloning agents still face long-standing challenges, namely 3D scene representation and human-level task learning, when adapting to a series of new tasks in practical scenarios. We investigate these challenges with NBAgent, a pioneering language-conditioned Never-ending Behavior-cloning Agent for embodied robots, which continually learns observation knowledge of novel 3D scene semantics and robot manipulation skills from skill-shared and skill-specific attributes, respectively. Specifically, we propose a skill-shared semantic rendering module and a skill-shared representation distillation module to effectively learn 3D scene semantics from skill-shared attributes, thereby addressing the neglect of 3D scene representation. Meanwhile, we establish a skill-specific evolving planner to decouple manipulation knowledge, which can continually embed novel skill-specific knowledge, as humans do, from a latent low-rank space. Finally, we design a never-ending embodied robot manipulation benchmark, and extensive experiments demonstrate the significant performance of our method.
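The abstract does not spell out how skill-specific knowledge is embedded in a low-rank space. As a rough illustration only (not the paper's actual method), one common way to realize continual, non-interfering skill learning is to freeze a shared weight matrix and attach a separate low-rank adapter per skill; the class name `SkillSharedLayer` and all details below are hypothetical:

```python
import numpy as np

class SkillSharedLayer:
    """Hypothetical sketch: frozen shared linear map plus per-skill
    low-rank adapters, loosely inspired by the low-rank skill-specific
    space described in the abstract (not the authors' implementation)."""

    def __init__(self, d_in, d_out, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.1  # shared, frozen
        self.rank = rank
        self.adapters = {}  # skill name -> (A, B) low-rank pair

    def add_skill(self, name):
        # A new skill adds a fresh low-rank pair; adapters of earlier
        # skills are untouched, so learned behavior is not overwritten.
        d_out, d_in = self.W.shape
        A = np.zeros((d_out, self.rank))  # zero-init: no initial drift
        B = np.random.default_rng(len(self.adapters)).standard_normal(
            (self.rank, d_in)) * 0.01
        self.adapters[name] = (A, B)

    def forward(self, x, skill):
        # Shared path plus the skill-specific low-rank correction.
        A, B = self.adapters[skill]
        return self.W @ x + A @ (B @ x)
```

With zero-initialized `A`, a freshly added skill starts from the shared behavior, and registering further skills leaves existing skills' outputs bit-for-bit unchanged, which is the property a never-ending learner needs.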

Wenqi Liang, Gan Sun, Yao He, Yu Ren, Jiahua Dong, Yang Cong • 2024

Related benchmarks

Task                | Dataset         | Result                | Rank
Embodied Navigation | LENL 1.0 (test) | Success Rate (S1): 29 | 13
