ICLR: In-Context Imitation Learning with Visual Reasoning

About

In-context imitation learning enables robots to adapt to new tasks from a small number of demonstrations without additional training. However, existing approaches typically condition only on state-action trajectories and lack explicit representations of task intent. This limitation hinders performance in complex and ambiguous task settings where the same actions may be consistent with different objectives. To address this, we present In-Context Imitation Learning with Visual Reasoning (ICLR), a novel framework that augments demonstration prompts with structured visual reasoning traces representing anticipated future robot trajectories in image space. ICLR jointly learns to generate reasoning traces and low-level actions within a unified autoregressive transformer, enabling the model to imitate not only the actions themselves but also the reasoning process that produces them. We extensively evaluate ICLR in both simulation and real-world manipulation tasks and demonstrate consistent improvements in success rates and generalization to unseen tasks and novel object configurations compared to other in-context imitation learning methods. These results suggest that incorporating embodied visual reasoning is a promising direction for enhancing the robustness and generalization of robotic in-context learning systems.
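To make the prompting idea concrete, here is a minimal sketch of how demonstrations might be interleaved with visual reasoning traces and action tokens into one autoregressive sequence. The delimiter tokens, data layout, and `build_prompt` helper are illustrative assumptions for exposition, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical segment delimiters (assumed for illustration, not from the paper).
OBS, TRACE, ACT = "<obs>", "<trace>", "<act>"

@dataclass
class Demo:
    """One in-context demonstration."""
    obs: List[int]                 # tokenized image observation
    trace: List[Tuple[int, int]]   # anticipated future trajectory as (u, v) pixels
    actions: List[int]             # discretized low-level action tokens

def build_prompt(demos: List[Demo], query_obs: List[int]) -> List[str]:
    """Interleave each demo as <obs> ... <trace> ... <act> ..., then append the
    query observation; a model trained on this format would autoregressively
    continue with a reasoning trace followed by action tokens."""
    seq: List[str] = []
    for d in demos:
        seq += [OBS] + [str(t) for t in d.obs]
        seq += [TRACE] + [f"({u},{v})" for u, v in d.trace]
        seq += [ACT] + [str(a) for a in d.actions]
    # The prompt ends at <trace>, so generation begins with visual reasoning.
    seq += [OBS] + [str(t) for t in query_obs] + [TRACE]
    return seq

demo = Demo(obs=[3, 7], trace=[(12, 40), (14, 38)], actions=[5, 5, 2])
prompt = build_prompt([demo], query_obs=[4, 9])
print(" ".join(prompt))
```

Ending the prompt at the `<trace>` delimiter is what forces the model to emit its image-space reasoning before committing to actions, mirroring the joint reasoning-then-acting objective described above.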

Toan Nguyen, Weiduo Yuan, Songlin Wei, Hui Li, Daniel Seita, Yue Wang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Poking | Real-world robot manipulation unseen tasks | Hippo Performance: 70 | 5 |
| Robotic Manipulation | LIBERO Object 90 unseen tasks (test) | Overall Success Rate (Object 90): 70.89 | 5 |
| Pick-&-Place | Real-world robot manipulation unseen tasks | Success Rate (Dumpling to Red box): 65 | 5 |
