
RoboAgent: Chaining Basic Capabilities for Embodied Task Planning

About

This paper focuses on embodied task planning, where an agent acquires visual observations from the environment and executes atomic actions to accomplish a given task. Although recent Vision-Language Models (VLMs) have achieved impressive results in multimodal understanding and reasoning, their performance remains limited when applied to embodied planning that involves multi-turn interaction, long-horizon reasoning, and extended context analysis. To bridge this gap, we propose RoboAgent, a capability-driven planning pipeline in which the model actively invokes different sub-capabilities. Each capability maintains its own context and, according to the query issued by a scheduler, either produces intermediate reasoning results or interacts with the environment. This framework decomposes complex planning into a sequence of basic vision-language problems that VLMs can better address, enabling a more transparent and controllable reasoning process. The scheduler and all capabilities are implemented with a single VLM, without relying on external tools. To train this VLM, we adopt a multi-stage paradigm that consists of: (1) behavior cloning with expert plans, (2) DAgger training using trajectories collected by the model, and (3) reinforcement learning guided by an expert policy. Across these stages, we exploit the environment simulator's internal state to construct high-quality supervision for each capability, and we further introduce augmented and synthetic data to enhance the model's performance in more diverse scenarios. Extensive experiments on widely used embodied task planning benchmarks validate the effectiveness of the proposed approach. Our code will be available at https://github.com/woyut/RoboAgent_CVPR26.
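The scheduler-and-capabilities loop described above can be sketched as follows. This is a minimal illustration under assumed interfaces: the capability names (`perceive`, `reason`, `act`), the `vlm` callable, and the `ToyEnv` environment are all hypothetical placeholders, not the paper's actual API.

```python
# Sketch of a capability-driven planning loop (hypothetical interfaces).
# One VLM backs both the scheduler and every sub-capability; each
# capability keeps its own context, so long-horizon planning is split
# into short, focused vision-language queries.

class Capability:
    """One sub-capability with its own private context."""
    def __init__(self, name, vlm):
        self.name, self.vlm, self.context = name, vlm, []

    def run(self, query, observation):
        self.context.append((query, observation))
        answer = self.vlm(self.name, self.context, query)
        self.context.append(answer)
        return answer

def plan(task, env, vlm, max_steps=20):
    """Scheduler loop: the same VLM both schedules and answers queries."""
    # Hypothetical capability set; the paper does not enumerate its names.
    caps = {n: Capability(n, vlm) for n in ("perceive", "reason", "act")}
    history = []
    for _ in range(max_steps):
        obs = env.observe()
        # The scheduler decides which capability to invoke and with what query.
        choice, query = vlm("scheduler", history, (task, obs))
        result = caps[choice].run(query, obs)
        history.append((choice, query, result))
        # Only the "act" capability touches the environment.
        if choice == "act" and env.step(result):
            return history  # environment reports the task complete
    return history

# Toy deterministic stand-ins so the loop can run end to end.
def toy_vlm(role, context, query):
    if role == "scheduler":  # alternate: look first, then act
        return (("perceive", "describe the scene") if len(context) % 2 == 0
                else ("act", "pick up object"))
    return f"{role}-answer"

class ToyEnv:
    def observe(self):
        return "rgb-frame"
    def step(self, action):
        return True  # pretend one action finishes the task

history = plan("put the mug in the sink", ToyEnv(), toy_vlm)
```

With the toy stubs, the scheduler issues one `perceive` query and one `act` query before the environment reports success; in the real pipeline every `toy_vlm` call would instead be an inference call to the single trained VLM.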

Peiran Xu, Jiaqi Zheng, Yadong Mu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Embodied AI Task Planning | EB-ALFRED | Average Score: 67 | 28 |
| Embodied Task Planning | ALFWorld visual observation | Avg Success Rate: 77.6 | 10 |
| Embodied Task Planning | ALFWorld textual observation (unseen) | Success Rate: 94 | 9 |
| Embodied Task Planning | ALFWorld textual observation (seen) | Success Rate: 92.1 | 9 |
| Embodied Task Planning | EB-Habitat (OOD) | Success Rate: 22.3 | 6 |
| Subgoal Planning | LoTa-WAH OOD | SSR: 22.1 | 5 |
